Ten Years of IPv6, Maybe?

It seems like it's now a tradition that, once a year, I end up ranting about IPv6. This usually happens because either I'm trying to do something involving IPv6 and I get stumped, or someone finds one of my old blog posts and complains about it. This time is different, to a point. Yes, I sometimes throw some of my older posts out there, and they receive criticism in the form of "it's from 2015" – which people think is relevant, but isn't, since nothing really changed – but the occasion this year is celebrating the ten-year anniversary of World IPv6 Day, the so-called 24-hour test of IPv6 by the big players of network services (including, but not limited to, my current and past employers).

For those who weren't around or aware of what was going on at the time, this was a one-time event in which a number of companies and sites organized to start publishing AAAA (IPv6) records for their main hostnames for a day. Previously, a number of test hostnames existed, such as ipv6.google.com, so if you wanted to work on IPv6 tech you could, but you had to go and look for it. The whole point of the single-day test was to make sure that users wouldn't notice if they started using the v6 version of the websites — though as history now tells us, a day was definitely not enough to find many of the issues around it.

For most of these companies it wasn't until the following year, on 2012-06-06, that IPv6 "stayed on" on their main domains and hostnames, which should have given enough time to address whatever might have come out of the one-day test. For a few, such as OVH, the test looked good enough to keep IPv6 deployed afterwards, and that gave a few of us a preview of the years to come.

I took part in the test day (as well as the launch) — at the time I was exploring options for getting IPv6 working in Italy through tunnels, and I tried a number of different options: Teredo, 6to4, and eventually Hurricane Electric. If you've been around those circles long enough, you may be confused by the absence of SixXS as an option — I had encountered their BOFH side of things, and got my account closed for signing up with my Gmail address (that was before I started using Gmail For Business on my own domain). Even when I was told that if I signed up with my Gentoo address I would have received extra credits, I didn't want to deal with that behaviour, so I just skipped the option.

So ten years on, what lessons did I learn about IPv6?

It’s A Full Stack World

I’ve had a number of encounters with self-defined Network Engineers, who think that IPv6 just needs to be supported at the network level. If your ISP supports IPv6, you’re good to go. This is just wrong, and shouldn’t even need to be debated, but here we are.

Not only does supporting IPv6 require using slightly different network primitives at times – after all, Gentoo has had an ipv6 USE flag for years – but you also need to make sure that anything consuming IP addresses throughout your application knows how to deal with IPv6. For an example, take my old post about Telegram's IPv6 failures.
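To make "slightly different primitives" concrete, here is a minimal sketch in Python — not from the Telegram post, just an illustration with a made-up hostname: legacy code that hard-codes an AF_INET socket only ever speaks v4, while code built around getaddrinfo() transparently tries both AAAA and A results.

import socket

def connect_dual_stack(host, port):
    """Try every address family the resolver returns, v6 and v4 alike."""
    last_error = None
    # AF_UNSPEC asks for both AAAA and A records; the legacy
    # socket.socket(socket.AF_INET, ...) pattern would only ever do v4.
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
        host, port, socket.AF_UNSPEC, socket.SOCK_STREAM
    ):
        try:
            sock = socket.socket(family, socktype, proto)
            sock.connect(sockaddr)
            return sock
        except OSError as err:
            last_error = err
    raise last_error or OSError("no usable address")

sock = connect_dual_stack("www.example.com", 443)  # hypothetical usage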

As far as I know their issue is solved, but it's far from uncommon — after all, it's an obvious trick to feed legacy applications a fake IPv4 if you can't adapt them to IPv6 quickly enough. If they're not actually using it to initiate a connection, but only for (short-term) session retrieval or logging, you can get away with this until you replace or lift the backend of a network application. Unfortunately that doesn't work well when the address is shown back to the user — and the same is true when the IP needs to be logged for auditing or security purposes: you cannot map arbitrary IPv6 addresses into a 32-bit address space, so while you may be able to provide a temporary session identifier, you would need something mapping the session time and the 32-bit identifier together, to match the original source of the request.
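To sketch the shape of that shim — every name here is hypothetical, this is just an illustration of the mapping I'm describing — the legacy backend keeps receiving something shaped like an IPv4 address, while a side table records which real source address and timestamp it stood for:

import ipaddress
import itertools
import time

# Synthetic 32-bit "addresses" handed to the legacy backend; the counter
# will eventually wrap, which is exactly why the timestamp must be kept.
_counter = itertools.count(1)
_sessions = {}  # token -> (real source address, time of request)

def fake_ipv4_for(source):
    addr = ipaddress.ip_address(source)
    if addr.version == 4:
        return str(addr)  # real v4 can pass through untouched
    token = str(ipaddress.IPv4Address(next(_counter) % 2**32))
    _sessions[token] = (str(addr), time.time())
    return token

# Auditing later needs the token *and* the time window:
# _sessions["0.0.0.1"] -> ("2001:db8::42", 1621234567.0)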

Another example of where the difference between IPv4 and IPv6 might cause hard-to-spot issues is in anonymisation. Now, I'm not a privacy engineer and I won't suggest that I've got a lot of experience in the field, but I have seen attempts at "anonymising" user IPv4s by storing (or showing) only the first three octets. Besides the fact that this doesn't work if you are trying to match up people within a small pool (getting to the ISP would be plenty in some of those cases), it does not work with IPv6 at all — you can have 120 of the 128 bits and still pretty much be able to identify a single individual.
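A hedged sketch of why, using Python's ipaddress module: the "drop the last octet" folklore translates to a /24 mask for v4, but for v6 you need to throw away far more than eight bits before the result stops pointing at a single machine — a /48 (an entire site) is arguably the closest equivalent:

import ipaddress

def anonymise(source):
    addr = ipaddress.ip_address(source)
    # /24 mirrors "keep the first three octets" for v4; keeping 120 of
    # 128 bits of a v6 address would still identify a single host, so
    # even a /48 is a generous choice here.
    prefix = 24 if addr.version == 4 else 48
    network = ipaddress.ip_network(f"{addr}/{prefix}", strict=False)
    return str(network.network_address)

print(anonymise("192.0.2.123"))          # 192.0.2.0
print(anonymise("2001:db8::dead:beef"))  # 2001:db8::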

You’re Only As Prepared As Your Dependencies

This is pretty much a truism in software engineering in general, but it might surprise people that this applies to IPv6 even outside of the “dependencies” you see as part of your code. Many network applications are frontends or backends to something else, and in today’s world, with most things being web applications, this is true for cross-company services too.

When you’re providing a service to one user, but rely on a third party to provide you a service related to that user, it may very well be the case that IPv6 will get in your way. Don’t believe me? Go back and read my story about OVH. What happened there is actually a lot more common than you would think: whether it is payment processors, advertisers, analytics, or other third party backends, it’s not uncommon to assume that you can match the session by the source address and time (although that is always very sketchy, as you may be using Tor, or any other setup where requests to different hosts are routed differently.

Things get even more complicated as time goes by. Let's take another look at the example of OVH (knowing full well that it was 10 years ago): the problem there was not that the processor didn't support IPv6 – though it didn't – the problem was that the communication between OVH (v6) and the payment processor (v4) broke down. It's perfectly reasonable for the payment processor to request, through a back-channel, information about the customer the vendor is sending through: if the vendor is convinced they're serving a user in Canada, but the processor is receiving a credit card from Czechia, something smells fishy — and payments are all about risk management after all.

Breaking when using Tor is just as likely, but that can also be perceived as a feature, from the point of view of risk. But when the payment processor cannot understand what the vendor is talking about – because the vendor was talking to you over v6, and passed that information to a processor expecting v4 – you just get a headache, not risk management.

How did this become even more complicated? Well, at least in Europe a lot of payment processors had to implement additional verification through systems such as 3DSecure, Verified By Visa, and whatever Amex calls it. It’s often referred to as Strong Customer Authentication (SCA), and it’s a requirement of the 2nd Payment Service Directive (PSD2), but it has existed for a long while and I remember using it back before World IPv6 Day as well.

With SCA-based systems, a payment processor has pretty much no control on what their full set of dependencies is: each bank provides their own SCA backend, and to the best of my understanding (with the full disclosure that I never worked on payment processing systems), they all just talk to Visa and MasterCard, who then have a registry of which bank’s system to hit to provide further authentication — different banks do this differently, with some risk engine management behind that either approves straight away, or challenges the customer somehow. American Express, as you can imagine, simplifies their own life by being both the network and the issuer.

The Cloud is Vastly V4

This is probably the one place where I'm just as confused as some of the IPv6 enthusiasts. Why do neither AWS nor Google Cloud provide IPv6 as an option for virtual machines, to the best of my knowledge?

If you use “cloud native” solutions, at least on Google Cloud, you do get IPv6, so there’s that. And honestly if you’re going all the way to the cloud, it’s a good design to leverage the provided architecture. But there’s plenty of cases in which you can’t use, say, AppEngine to provide a non-HTTP transport, and having IPv6 available would increase the usability of the protocol.

Now this is interesting, because other providers go different ways. Scaleway does indeed provide IPv6 by default (though not in the best of ways, in my experience), and it's actually cheaper to run on IPv6 only — I guess that if you do use a CDN, you could ask them to provide a dual-stack frontend while talking to your IPv6-only backend. That is very similar to some of the backend networks I have designed in the past, where containers (well before Docker) didn't actually have outbound IPv4 connectivity, and relied on a proxy to provide them with connections to the wide world.

Speaking of CDNs – which are probably not often considered part of Cloud Computing, but I will bring them in anyway – I have mused before that it's funny how a number of websites that use Akamai and other CDNs appear not to support IPv6, despite the fact that the CDNs themselves do provide IPv6-frontend services. I can't say for sure that this isn't related to something "silly" such as pricing, but certainly there are more concerns to supporting IPv6 than just "flipping a switch" in the CDN configuration: as I wrote above, there are definitely full-stack concerns with receiving inbound connections via IPv6 — even if the service does not need full auditing logs of who's doing what.

Privacy, Security, and IPv6

If I were to say "IPv6 is a security nightmare", I'd probably get a divided room — I think there's a lot of nuance needed to discuss privacy and security around IPv6.

Privacy

First of all, it’s obvious that IPv6 was designed and thought out at a different time than the present, and as such it brought with it some design choices that, looking at them nowadays, look wrong or even laughable. I don’t laugh at them, but I do point out that they were indeed made with a different idea of the world in mind, one that I don’t think is reasonable to keep pining for.

The idea that you can tie an end-user IPv6 address to the physical (MAC) address of an adapter is not something that you would come up with in 2021 — and indeed, IPv6 was retrofitted with at least two proposals for "privacy-preserving" address generation options. After all, the very idea of "fixed" MAC addresses appears to be on the way out — mobile devices started using random MAC addresses tied to specific WiFi networks, to reduce the likelihood of people being tracked between different networks (and thus different locations).

Given that IPv6 is being corrected, you may expect then that the privacy issue is now closed, but I don’t really believe that. The first problem is that there’s no real way to enforce what your equipment inside the network will do from the point of view of the network administrator. Let me try to build a strawman for this — but one that I think is fairly reasonable, as a threat model.

While not every small-run manufacturer will go out of their way to be assigned an OUI to give their devices a "branded" MAC address – many of the big ones don't even do that, and leave the MAC provided by the chipset vendor – a few of them do. I know that we did that at one of my previous customers, where we decided not only to get an OUI to use for setting the MAC addresses of our devices, but also to use it as the serial number for the device itself. And I know we're not alone.

If some small-run IoT device is shipped with a manufacturer-provided MAC address under their own OUI, it's likely that the addresses themselves are predictable. They may not quite be sequential, and they probably won't start from 00:00:01 (they didn't in our case), but it might well be possible to figure out at least a partial set of addresses that the devices might use.

At that point, if these don't use a privacy-preserving ephemeral IPv6, it shouldn't be too hard to "scan" a network for the devices, by calculating the effective IPv6 addresses on the same /64 network as a user request. This is simplified by the fact that, most of the time, ICMPv6 is allowed through firewalls — because some of it is needed for operating IPv6 altogether, and way too often even I left stuff more open than I would have liked to. A smart gateway would be able to notice this kind of scan, but… I'm not sure how most routers do with things like this, still. (As it turns out, the default UniFi setup at least seems to configure this correctly.)
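For the curious, the computation involved is trivial; this is a sketch of the modified EUI-64 derivation that SLAAC uses when privacy extensions are off, with a documentation prefix and a made-up OUI:

import ipaddress

def slaac_address(prefix, mac):
    # Modified EUI-64: flip the universal/local bit of the first octet
    # and insert ff:fe between the two halves of the MAC.
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02
    eui64 = bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])
    network = ipaddress.IPv6Network(prefix)
    return network.network_address + int.from_bytes(eui64, "big")

# A predictable OUI plus near-sequential serials makes the scan cheap:
for serial in range(3):
    mac = f"00:11:22:00:00:{serial:02x}"  # hypothetical vendor OUI
    print(slaac_address("2001:db8:0:1::/64", mac))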

There’s another issue — even with privacy extensions, IPv6 addresses are often provided by ISPs in the form of /64 networks. These networks are usually static per subscriber, as if they were static public IPv4, which again is something a number of geeks would like to have… but has also side effects of being able to track a whole household in many cases.

This is possibly controversial with some folks, because the move from static addresses to dynamic dialup addresses marked the advent of what some annoying greybeards refer to as Eternal September (if you use that term in the comments, be ready to be moderated away, by the way). But with dynamic addresses came some level of privacy. Of course the ISP could always figure out who was using a certain IP address at a given time, but websites wouldn't be able to keep users tracked day after day.

Note that dynamic addresses were not meant to address the need for privacy; that was just incidental. And the fact that you would change addresses often enough that websites couldn't track you was also not by design — it was just expensive to stay connected for long periods of time, and even on flat rates you may have needed the phone line to make calls, or you may just have lost the connection because of… something. The game (eventually) changed with DSL (and cable and other similar systems), which didn't hold the phone line busy and was much more stable; eventually, the use of always-on routers instead of "modems" connected to a single PC made cycling to a new address a rare occurrence.

Funnily enough, we once again gained some tidbit of privacy in this space with the advent of carrier-grade NAT (CGNAT), which, once again, was not designed for this at all. But since it concentrates multiple subscribers (sometimes entire neighbourhoods or towns!) behind a single outbound IP address, it makes it harder to tell the person (or even the household) accessing a certain website — unless you are the ISP, clearly. This, by the way, is much the same principle that certain VPN providers use nowadays to sell their product as a privacy feature; don't believe them.

This is not really something the Internet was designed for — protection against tracking didn't appear to be one of the worries of an academic network that assumed each workstation would be directly accessible to others, and shared among different users. The world we live in, with user devices that are increasingly single-tenant, and not meant to be accessible by anyone else on the network, is different from what the original design of the network envisioned. And IPv6 carried on with that design to begin with.

Security

Now, on the other hand, there's the problem of actual endpoint security. One of the big issues with network protocols is that firewalls are often designed with one, and only one, protocol in mind. Instead of a strawman, this time let me talk about an episode from my past.

Back when I was in high school, our systems lab was the only laboratory with a mixed IP (v4, obviously — it was over 16 years ago!) and IPX network, since one of the topics they were meant to teach us was how to set up a NetWare network (it was a technical high school). All of the computers were set up with Windows 95, with what few security features were available, including a software firewall (I think it was ZoneAlarm, but my memory is fuzzy on this point). While I'm sure that trying to disable it or work around it would have worked just as well, most of us decided to not even try: Unreal Tournament and ZSNES worked over IPX just as well, and ZoneAlarm had no idea what we were doing.

Now, you may want to point out that you should obviously secure your systems for both IP generations anyway. And you'd be generally right. But given that systems are sometimes lifted and shifted many times, it might very well be that there's no certainty that a legacy system (OS image, configuration, practice, network, whatever it is) can be safely deployed in a v6 world. If you're forced to, you'll probably invest money and time to make sure that it can be — if you don't have an absolute need beside "it's the Right Thing To Do", you'll most likely try to live as long as you can without it.

This is why I'm not surprised to hear that for many sysadmins out there, disabling IPv6 is part of the standard operating procedure of setting up a system, whether it is a workstation or a server. This is not helped by the fact that on Linux it's way too easy to forget that ip6tables is different from iptables (and yes, I know that this is hopefully changing soon).

Software and Hardware Support

This is probably the aspect of IPv6 fandom where I feel most at home. For operating systems and hardware (particularly network hardware) not to support IPv6 in 2021 feels like we're being cheated. IPv6 is a great tool for backend networks, as you can quickly drop a lot of legacy IPv4 features, and use cascading delegation of prefixes in place of NAT (I've done this, multiple times) — so not supporting it at the very basic level is damaging.

Microsoft used to push IPv6 with a lot of force: among other things, they included Teredo in Windows XP (add that to the list of reasons why some professionals still look at IPv6 suspiciously). Unfortunately WSL2 (and, as far as I understand, Hyper-V) do not allow using IPv6 for the guests on a Windows 10 workstation. This has gotten in my way a couple of times, because I'm otherwise used to just jumping around my network with IPv6 addresses (at least before Tailscale).

Similarly, while UniFi works… acceptably well with IPv6, it still considers it an afterthought, and they are not exactly your average home broadband router either. When even semi-professional network equipment manufacturers can't give you a good experience out of the box, you do need to start asking yourself some questions.

Indeed, if I could have a v6-only network with NAT64, I might do that. I still believe it is useless and unrealistic, but since I actually do develop software in my spare time, I would like to have a way to test it. It’s the same reason why I own a number of IDN domains. But despite having a lot of options for 802.1x, VLAN segregation, and custom guest hotspots, there’s no trace of NAT64 or other goodies like that.

Indeed, the management software pretty much only shows you IPv4 addresses for most things, and you need to dig deep to find the correct settings to even allow IPv6 on a network and set it up correctly. Part of the reason is likely that clients have a lot more weight in address selection than in v4: while DHCPv6 is a thing, it's not well supported (still not supported at all on WiFi on Android as far as I know — all thanks to another IPv6 "purist"), and the router advertisement and neighbour discovery protocols allow for passive autoconfiguration that, on paper, is so much nicer than the "central authority" of DHCP — but it makes things harder to administer "centrally".

iOS, and Apple products in general, appear to be fond of IPv6. More than Android, for sure. But most of my IoT devices are still unable to work on an IPv6-only network. Even ESPHome, which is otherwise an astounding piece of work, does not appear to provide IPv6 endpoints — and I don't know how much of that is because the hardware acceleration is limited to v4 structures, and how much is just that it would consume more memory on such a small embedded device. The same goes for CircuitPython when using the AirLift FeatherWing.

The folks who gave us the Internet of Things name sold the idea of every device in the world being connected to the Internet through a unique IPv6 address. This is now a nightmare for many security professionals, a wet dream for certain geeks, but most of all an unrealistic situation that I don't expect to become reality in my lifetime.

Big Names, Small Sites

As I said at the beginning explaining some of the thinking behind World IPv6 Day and World IPv6 Launch, a number of big names, including Facebook and Google, have put their weight behind IPv6 from early on. Indeed, Google keeps statistics of IPv6 usage with per-country split. Obviously these companies, as well as most of the CDNs, and a number of other big players such as Apple and Netflix, have had time, budget, and engineers to be able to deploy IPv6 far and wide.

But as I have ventured before, I don't think that they are enough to make a compelling argument for IPv6-only networks. Even when the adoption of IPv6 in addition to IPv4 might make things more convenient for ISPs, the likelihood of being able to drop IPv4 compatibility tout-court is approximately zero, because the sites people actually need are not going to be available over v6 any time soon.

I’m aware of trackers (for example this one, but I remember seeing more of those) that tend to track the IPv6 deployment for “Alexa Top 500” (and similar league tables) websites. But most of the services that average people care about don’t seem to be usually covered by this.

The argument I made in the linked post boils down to this: the day-to-day of an average person is split between a handful of big-name websites (YouTube, Facebook, Twitter), and a plethora of websites that are anything but global. Irish household providers are never going to make any of the Alexa league tables — and the same is likely true for most other countries that are not China, India, or the United States.

Websites league tables are not usually tracking national services such as ISPs, energy suppliers, mobile providers, banks and other financial institutions, grocery stores, and transport companies. There are other lists that may be more representative here, such as Nielsen Website Ratings, but even those are targeted at selling ad space — and suppliers and banks are not usually interested in that at all.

So instead, I’ve built my own. It’s small, and it mostly only cares about the countries I experienced directly; it’s IPv6 in Real Life. I’ve tried listing a number of services I’m aware of, and should give a better idea of why I think the average person is still not using IPv6 at all, except for the big names we listed above.

There’s another problem with measuring this when resolving hosts (or even connecting to them — which I’m not actually doing in my case). While this easily covers the “storefront” of each service, many use separate additional hosts for accessing logged-in information, such as account data. I’m covering this by providing a list of “additional hosts” for each main service. But while I can notice where the browser is redirected, I would have to go through the whole network traffic to find all the indirect hosts that each site connects to.

Most services, including the big tech companies often have separate hosts that they use to process login requests and similar high-stake forms, rather than using their main domain. Or they may use a different domain for serving static content, maybe even from a CDN. It’s part and parcel of the fact that, for the longest time, we considered hostnames to be a security perimeter. It’s also a side effect of wanting to make it easier to run multiple applications written in widely different technologies — one of my past customers did exactly this using two TLDs: the marketing pages were on a dot-com domain, while the login to the actual application would be on the dot-net one.

Because of this “duality”, and the fact that I’m not really a customer of most of the services I’m tracking, I decided to just look at the “main” domain for them. I guess I could try to aim higher and collect a number of “service domains”, but that would be a point of diminishing returns. I’m going to assume that if the main website (which is likely simpler, or at least with fewer dependencies) does not support IPv6, their service domains don’t, either.
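The check itself is the trivial part, for what it's worth; this sketch does the resolve-only test I described, asking for AAAA records of each main domain (the domain list is a placeholder — and note that on a DNS64 network the resolver could synthesise answers and skew the result):

import socket

def has_ipv6(hostname):
    """True if the hostname resolves to at least one IPv6 address."""
    try:
        return bool(socket.getaddrinfo(hostname, None, socket.AF_INET6))
    except socket.gaierror:
        return False

for site in ("www.example.com", "www.example.net"):  # placeholder list
    print(site, "IPv6" if has_ipv6(site) else "v4 only")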

You may have noticed that, in some cases, smaller companies and groups appear to have better IPv6 deployments. This is not surprising: not only can you audit smaller codebases much faster than the extensive web of dependencies of big companies' applications, but the reality of many small businesses is that the system and network administrators get a bit more time to learn and apply skills, rather than having to follow a stream of tickets from everyone in the organization who is trying to deploy something, or has a flaky VPN connection.

It also makes it easier for smaller groups to philosophically think of "what's the Right Thing To Do", versus the medium-to-big company reality of "what does the business get out of spending time and energy on deploying this?" To be fair, it looks like Apple's IPv6 requirements might have pushed some of the right buttons for that — except that they don't really require the services used by the app to be available over IPv6: it's acceptable for the app to connect through NAT64 and similar gateways.

Conclusions

I know people paint me as a naysayer — and sometimes I do feel like one. I fear that IPv6 is not going to become the norm during my lifetime, and definitely not during my career. It is the norm to me, because working for big companies you do end up working with IPv6 anyway. But most users won't have to care for much longer.

What I want to point out to the IPv6 enthusiasts out there is that the road to adoption is harsh, and it won't get better any time soon. Unless some killer application of IPv6 comes out, where supporting v4 is no longer an option, most smaller players won't bother. It's a cost to them, not an advantage.

The performance concerns of YouTube, Netflix, or Facebook will not apply to your average European bank. The annoyance of going through CGNAT that Tailscale experiences is not going to be a problem for your smart lightbulb manufacturer who just uses MQTT.

Just saying “It’s the Right Thing To Do” is not going to make it happen. While I applaud those who are actually taking the time to build IPv6-compatible software and hardware, and I think that we actually need more of them taking the pragmatic view of “if not now, when?”, this is going to be a cost. And one that for the most part is not going to benefit the average person in the medium term.

I would like to be wrong, honestly. I would love for next year to magically bring firmware updates for everything I have at home, so it could all work with IPv6 — but I don't think it will. And I don't think I'll replace everything I own just because we ran out of address space in IPv4. It would be a horrible waste, to begin with, in the literal sense: the last thing we want is to tell people to throw away anything that does not speak IPv6, as it would just pile up as e-waste.

Instead I wish that more IPv6 enthusiasts would get to carry the torch of IPv6 while understanding that we’ll live with IPv4 for probably the rest of our lives.

Home Automation: Physical Contact

In the previous post on the matter, I described the setup of lighting solutions that we use in the current apartment, as well as the previous apartment, and my mother’s house. Today I want to get into a little bit more detail of how we manage to use all of these lights without relying solely on our phones, or on the voice controlled assistants.

First of all, yes, we do have both Google Assistant and Alexa at home. I only keep Assistant in the office because it's easiest to disable the mic on it with the physical switch, but otherwise we like the convenience of asking Alexa for the weather, or letting Google read Sarah Millican for us. To make things easier to integrate, we also signed up for Nabu Casa to integrate our Home Assistant automations with them.

While this works fairly decently for most default cases, sometimes you can't talk, for instance because your partner is asleep or dozing off, and you still want to control the lights (or anything else). The phone (with the Home Assistant app) is a good option, but it is often inconvenient, particularly if you're going around the flat in pocketless clothes.

As it turns out, one of the good things that smart lights, and IoT home automation in general, bring to the table is the ability to add buttons, which usually do not need to be wired into anything, and can be placed just about anywhere. These buttons also generally support more than one action (such as tap, double-tap, and hold), which allows providing multiple controls from a single position more easily. But there are many options for buttons, they are not generally compatible with each other, and I got myself confused for a long while.

So to save the day, Luke suggested some time ago that I look into Flic smart buttons, which were actually quite the godsend for us, particularly before we had Home Assistant set up at all. The way these work is that they are Bluetooth LE devices that pair with a proprietary Bluetooth LR "hub" (or with your phone). The hub can either connect to a bunch of network services, or work with a variety of local network devices, as well as send arbitrary HTTP requests if you configure it to.

While Flics were our first foray into adding physical control to our home automation, I'm not entirely sure I would recommend them now. While they are quite flexible at first glance, they are less than stellar in a more complex environment. For instance, while the Flic Hub can talk directly to LIFX lights on the local network (awesome, no Internet dependency), it doesn't have as much control over the results: when we used the four LIFX spots in the previous flat's office, local control was unusable, as nearly every other click would miss one spot, making it nearly impossible to keep them synchronised. Thankfully, LIFX is also available as a "Cloud" integration, which could handle the four lights just fine.

The Flic Hub can talk to a Hue Bridge as well, to toggle lights and select scenes, but this is still not as well integrated as an original Hue Smart Button: the latter can be configured to cycle between scenes on multiple taps, while I could only find ways to turn the light on or off, or to select one scene per action with the default Flic interface.

We also wanted to use Flic buttons to control some of the Home Assistant interactions. While buttons in the app are comfortable, and Google Assistant usually understands when we say "to the bedroom", there are times when we'd rather have a faster response than the full Google Assistant round-trip. Unfortunately this is an area where Flic leaves a lot to be desired.

First of all, the Flic Hub does not support IPv6 (surprise), which means I can't just point it at my Home Assistant hostname; I need to use the internal IPv4 address. Because of that, it cannot validate the TLS certificate either. Second, Flic does not have a native Home Assistant integration: for both Hue and LIFX, you can configure the Hub against a Cloud account or the local bridge, then configure actions to act on the devices connected to them, but for Home Assistant there is nothing, so the options are limited to setting up manual HTTP requests.

This is where things get fairly annoying. You can either issue a bearer token to log in to Home Assistant, in which case you can configure the Flic to execute a script directly, or you can use the webhook trigger to send the action to Home Assistant and handle it there. The former appears to be slightly more reliable in my experience, although I have not figured out whether it's the webhook request not being sent by the hub, or Home Assistant taking time to execute the automations attached to it; I should spend more time debugging that, but I have not had the time. Using bearer tokens is cumbersome, though. Part of the problem is that the token itself is an extremely long HTTP header, and while you can add custom headers to requests in the Flic app, the length of this header means you can't copy it, or even remove it. If you need to replace the token, you need to forget the integration with the HTTP request and create a new one altogether.
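For reference, these are the two shapes of request involved — sketched here with Python's requests rather than the Flic app's form, and with host, token, script, and webhook names all made up:

import requests

HA = "http://192.168.1.10:8123"  # internal v4 address, as lamented above
TOKEN = "..."  # the extremely long long-lived access token

# Option 1: call a script directly, authenticated with the bearer token.
requests.post(
    f"{HA}/api/services/script/turn_on",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"entity_id": "script.bedroom_toggle"},  # hypothetical script
    timeout=5,
)

# Option 2: hit an unauthenticated webhook, and let an automation
# triggered by that webhook decide what to do on the Home Assistant side.
requests.post(f"{HA}/api/webhook/flic_bedroom_tap", timeout=5)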

Now on the bright side, Flic has recently announced (by email, but not on their blog) that they launched a Software Development Kit that allows writing custom integrations with the Flic Hub. I have not looked deeply into it, because I have found other solutions that work for me to augment my current redundant Flics, but I would hope that it means we will have a better integration with Home Assistant one day in the future.

To explain what the better alternatives we're using are, we need to point out the obvious one first: the native Hue Smart Buttons. As I said in the previous post, I did get them for my mother, so that the lights on the staircase turn on and off the same way as they did before I fixed the wiring. We considered getting them here too, but it turns out those buttons are not cheap. Indeed, the main difference between the buttons we have considered (or tried, or are using) is to be found in the price. At the time of writing, the Hue buttons go for around £30, Flics for £20, Aqara (Xiaomi) buttons for around £8, and Ikea ones for £6 (allegedly).

So why not use cheaper options? Well, the problem is that all of these (on paper) require different bridges. Hue, Aqara and Ikea are all ZigBee based, but they don't interoperate. They also have different specs and availability. The Aqara buttons can be easily ordered from AliExpress and are significantly cheaper than the equivalent from Hue, but they are also bigger, and of a shape just strange enough to make them awkward to place next to the wallplates with the apartment's original switches, unlike both Flic and Hue. The Ikea ones are the cheapest, but unless you have a chance to pop into their store, it seems like they won't ship in much of a hurry. As I write this blog post, it's been nearly three weeks since I ordered them and they still have not shipped, with an original delivery estimate of just over a month — updated before even posting this: Ikea (UK) cancelled my order and the buttons are no longer available online, which means I also didn't get the new lights I was waiting for. I will update if any new model becomes available. In the meantime I checked the instructions, and it looks like these buttons only support a single tap action.

This is where things get more interesting thanks to Home Assistant. Electrolama sells a ZigBee stick that is compatible with Home Assistant and can easily integrate with pretty much any ZigBee device, including the Philips Hue lights and the Aqara buttons. And the Aqara supports tap, double-tap, and hold in the same fashion as Flic, but with a lot less delay and no lost events (again, in my experience). It turned out that, at the end of the day, the answer for us is to use the cheaper buttons from AliExpress and configure those, rather than dealing with Flic — though at the moment we have not removed the Flics around the apartment at all; we have rather decided to use them for slightly different purposes, for automations that can take a little more time to operate.

Indeed, latency is the biggest problem of using Flic with Home Assistant right now: even when the event is not lost, it can sometimes take a few seconds before it is fully processed, and in that time you probably would have gotten annoyed enough to ask a voice assistant instead — which sometimes causes the tap to be registered after the voice request, turning the light back off. The Aqara button, on the other hand, is pretty much instantaneous. I'm not entirely sure what's going on there; it feels like the bridge is "asleep" and can't send the request fast enough for Home Assistant to register it.

It is very likely that we will be replacing at least a couple of the Flics we already have set up with Aqara buttons, when they arrive. They support the same tap/double-tap/hold patterns as the Flic, but with significantly lower latency. They are bigger, though, and they do seem to be made of very cheap, brittle plastic: I nearly made it impossible to change the battery on my first one, because trying to open the compartment with a 20p coin completely flattened the back!

Once you have working triggers with ZigBee buttons, by the way, connecting more controls becomes definitely easier. I would really consider making a "ZigBee streamerdeck" to select the right inputs on the TV, to be honest. Right now we have one Flic to double-tap to turn on the Portal (useful in case one of our mothers is calling), and another to select PS4 (tap), Switch (double-tap), or Kodi (hold).

Wiring automations, and the selection of specific scenes, is the easiest thing you can do in Home Assistant, so you get a lot of power for a little investment in time, from my point of view. I'm really happy to have finally set it up just the way I want it. Although it's now time to consider updating the setup to no longer assume that either of us is always at home at any time. You know, with events happening again, and the end of lockdown in sight.

What’s Up With CircuitPython in 2021?

Last year, as I started going through some more electronics projects, both artsy and not, I lauded the ease that Circuit Python brings to building firmware for sometimes not-too-simple devices, particularly when combined with Adafruit's Feather line-up of boards with a fixed, interchangeable interface.

As I noted last month, though, it does feel like the situation didn't improve over the last year, but rather became messier and harder to use. Let me dig a bit more into what I think is the problem.

When I worked on my software-defined remote control, I stopped at a checkpoint that was usable, but a bit clunky. In that setup I was using a Feather nRF52840, because the M0 alternatives didn't work out for me, but I was also restarting it before each command was sent, because of… issues.

Indeed, my use case turned out to be fairly niche. On the M0, the problem was that the carrier wave used to send the signal to the Bravia TV was too far off in terms of duty cycle, which I assume was caused by the underpowered CPU running the timers. On the nRF52840 the problem was instead that, if I started both transmitters (for the Bravia and for the HDMI switch), one of them would be stuck high.

So what I was setting up the stream to do (which ended up not working due to muEditor acting up) was to test running the same code on three more Feathers: the Feather RP2040, the FeatherS2, and the Feather M4. These are quite a bit more powerful than the others, and I was hoping to find that they would work much better for my use case — but they turned out to tell me something entirely different.

The first problem is that the Circuit Python interfaces are, as always, changing. And they change depending on the device. This hit me before, with the Trinket M0 not implementing time the same way as the Feather M0, but in this case things changed quite a bit more. Which meant that I couldn't simply reuse the code I was running on the nRF Feather: it worked fine on the M4, it needed some tweaking for the FeatherS2, and it couldn't work at all on the RP2040.

Let’s start with the RP2040, the same way I wanted to start the stream. Turns out the RP2040 does not have PulseOut support at all, and this is one of the problems with the current API selection in Circuit Python: what you see before you try and look into the details, is that the module (pulseio) is present, but if you were to break down the features supported inside, you would find that sending a train of pulses with that module is not supported on the RP2040 (at least as of Circuit Python 6.2.0).

Now, the good part is that the RP2040 supports Programmable I/O (PIO), which would make things a lot more interesting, because the pulse output could most likely be implemented in the "pioasm" language. But that also requires figuring out a lot more than I originally planned: I thought I would at least reuse the standard pulse output for the SIRC transmitter, as that does need a 40 kHz carrier, while the harder problem was the non-carried signal for the switch (which I feed directly into the output of the IR decoder).

So instead I dropped down to the next on the list, the FeatherS2, which I was interested in because it also supports WiFi natively, instead of through the AirLift. And that was the even more annoying discovery: the PulseOut implementation for the ESP32-S2 port doesn't allow using a PWM carrier object; it needs the raw parameters instead. This is not documented — if you use the PulseOut interface as documented, you're presented with an exception stating Port does not accept PWM carrier. Pass a pin, frequency and duty cycle instead. At the time of writing, that particular string only has two hits on Google, both coming from Stanford course material that included the Circuit Python translation files, so hopefully this post will soon be the next hit on the topic.

Unfortunately there is also no documented way to detect whether you need to pass the PWM object or the set of raw parameters, which I solved in pysirc by checking which platform the code is running on:

import sys

import pulseio

# Ports whose PulseOut cannot wrap a PWMOut carrier (undocumented as of
# Circuit Python 6.2.0); pin, carrier_frequency and duty_cycle are the
# parameters pysirc receives from its caller.
_PULSEOUT_NO_CARRIER_PLATFORMS = {"Espressif ESP32-S2"}

if sys.platform in _PULSEOUT_NO_CARRIER_PLATFORMS:
    pulseout = pulseio.PulseOut(
        pin=pin, frequency=carrier_frequency, duty_cycle=duty_cycle
    )
else:
    pwm = pulseio.PWMOut(
        pin, frequency=carrier_frequency, duty_cycle=duty_cycle
    )
    pulseout = pulseio.PulseOut(pwm)

Since this is not documented, there is also no guarantee that it won't change in the future, which is fairly annoying, but not a big deal. It's also not the only place where you end up having to rely on sys.platform for the FeatherS2. The other place is pin names.

You see, one of the advantages of using Feathers for rapid prototyping, to me, was the ability to draw a PCB for a Feather footprint and know that, as long as I checked that the pins were not used by the combination of wings I needed (for instance, the AirLift that adds WiFi support forces you to avoid a few pins, because it uses them to communicate), I would be able to take the same Circuit Python code and run it with any of the other Feathers without worry.

But this is not the case for the FeatherS2: while obviously the ESP32-S2 does not have the same set of GPIOs as, say, the RP2040, I would have expected the board pin definitions to stay the same — not so. Indeed, while my code originally used board.D5 and board.D6 for the outputs, in the case of the FeatherS2 I had to use board.D20 and board.D21 to make use of the same PCB. And I had to do that by checking sys.platform, because nothing else would tell me.
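In practice the workaround looks like this — my own approach, with hypothetical names for the two outputs, not anything blessed by Circuit Python:

import sys

import board

# Same physical Feather footprint, different pin names per port.
if sys.platform == "Espressif ESP32-S2":
    SIRC_PIN, SWITCH_PIN = board.D20, board.D21
else:
    SIRC_PIN, SWITCH_PIN = board.D5, board.D6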

The worst part of that? These pin definitions conflict between the FeatherS2 and the Feather M4: on the M4, board.D20 is an on-board component, while board.D21 is SDA. On the FeatherS2, SDA is board.D10, and the whole right side of the pins is scrambled, which can cause significant conflicts and annoying debugging when using the same code on the same PCB with an M4 versus an S2.

This is what made me really worried about the direction of Circuit Python — when I looked at it last year, it was a very good HAL (Hardware Abstraction Layer) that allowed you to write code that worked on multiple boards. And while some of the features obviously wouldn't be available everywhere (such as good floating-point on the M0), those that were followed the same API. Nowadays? Every port appears to have a slightly different API, and different devices with the same form factor need different code. Which is kind of understandable, as the number of supported boards increased significantly, but it still goes against the way I was hoping it would go.

Now, if the FeatherS2 worked for my needs, that would have been good enough. After all, I adapted pysirc to work with the modified API, so I thought I would be ready. But then something else caught my attention: as I said above, the signal sent to the switch doesn't need to be modulated on top of a carrier wave. Indeed, if I did do that, the switch wouldn't recognize the commands. So for that I would like a 100% duty cycle, but the FeatherS2 doesn't have that as an option — nor does it seem able to set a 99% cycle. So I found myself looking at a very dirty signal compared to what I wanted.
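For context, Circuit Python expresses the duty cycle as a 16-bit fraction, so "always on" would be 0xFFFF; this sketch is what I would expect a 99% cycle to look like (pin and frequency as in my setup, but treat them as assumptions), and on the FeatherS2 it produced the dirty signal pictured below:

import board
import pulseio

def duty(percent):
    # 0xFFFF is nominally 100%; 99% comes out to 64880.
    return round(percent / 100 * 0xFFFF)

pwm = pulseio.PWMOut(board.D6, frequency=10, duty_cycle=duty(99))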

[Image: what was supposed to be a continuous 9ms pulse, on Saleae's Logic software.]

Basically, the FeatherS2 is not fit for purpose when it comes to the problem I needed solving. So at the end of the day, I went back to what is, in my opinion, the best implementation for the form factor: the Feather M4. This worked fine for sending the two signals without needing to restart the Feather (unlike the nRF), which shortened the time needed to send messages to the TV and switch significantly (yay!), even though it still produces a bit of a "dirty" pulse signal when running a 99% duty cycle at 10 Hz:

[Image: a slightly noisy 9ms pulse followed by a train of noisy pulses.]

But I’m still upset, because I feel Circuit Python is becoming less reliable on the “Write Once, Run Anywhere” suggestion that it was giving last year: you really need to make the code tailored for a particular implementation, instead of being able to rely on the already existing libraries.

Update 2021-05-26

Scott Shawcroft from Circuit Python got in touch as a follow-up to this post, and it looks like some of the issues I reported here were either already on the radar (PulseOut for the RP2040), re-opened following this post (PulseOut for the ESP32-S2), or have now been addressed (Feather pin names). A huge thanks to Scott and the whole Circuit Python team!

You can also see here that Scott is well aware of the importance of this all to fit together nicely — so hopefully the FeatherS2 is going to be a one-off special case!

Light Nerdery: Evolving Solutions At Homes

Of all topics, I find myself surprised that I can be considered a bit of a lights nerd. Sure, not as much as Technology Connections, but then again, who is nerdier than him on this? The lightbulb moment (see what I did there?) came while I was preparing to write the post that is now going to be in the future, when I realized that a lot of what I was about to write needed some references to my experiments with early LED lighting — and what do you know? I wrote about it in 2008! And after all, it was a post about new mirror lights that nearly ended up being my last post ever, published just a few hours before my trip to the ICU for pancreatitis.

Basically this is going to be a “context post” or “setup post”: it likely won’t have a call to action, but it will go into explaining the details of why I made certain tradeoffs, and maybe, if you’re interested in making similar changes to your home, it can give you a bit of an idea as well.

To set the scene, let me describe the three locations whose light setups I'll be covering. The first is my mother's house on the Venice mainland, an '80s-built semi-detached on two floors. The second is the flat in London I was living in when my now-wife moved in. And the third is the current flat we moved into together. You may notice a hole in this list: Dublin. Despite having lived there longer than in London, I don't have anything to say about its light setup, or even about the home in general; looking back, it seems I spent my time in Dublin treating it almost as a temporary hotel room, and spent very little time "making it mine".

A House Of Horrible Wiring

In that 2008 post I linked above, I complained that the first LED lights I bought and set up in my bedroom would keep glowing, albeit much dimmed, when turned off. Thanks to the post, and discussions with a number of people with more clue than me at the time, I did eventually find the reason: like in many other places throughout the house, two-way switches ("deviators") were used to allow turning the light on and off from different places. In the particular case of my bedroom, a switch by the door was paired with another on a cord, to be left by the bed. But instead of interrupting the live wire of the mains, they were interrupting the neutral, which meant that turning the light "off" still allowed enough current to flow between live and ground that the LEDs would stay lit.

This turned out to be a much, much bigger deal than a simple matter of LED lights staying on. Interrupting the neutral is not up to regulation: you may still have a phase present and get shocked touching the inside of the lampholder, even with the switch turned off; but most importantly, it looks like it causes the electronics of a number of CFLs to be "stressed", seriously shortening their lifespan. This was actually something we had noticed and complained about for many years, but never realised was connected.

Eventually, I fixed the wiring in my bedroom (and removed the deviator I didn't quite need), but I also found another wiring disaster. The stairwell at my mother's had two light points, one on the landing and one at the top of the stairs, controlled together by switches at the top and bottom of the staircase. As expected, these interrupted the neutral; but most importantly, each position was wired into the live of the floor it was closest to, ignoring the separation of the circuits and leaving a phase through even when turning one circuit off. And that explained why no bulb ever lasted more than a year there.

Unfortunately, fixing the deviators there turned out to be pretty much impossible, due to the way the wiring conduits were laid inside the walls. So instead I had to make do with separating the two lights, which was not great, but was acceptable while I was still living with my mother: I would turn on the light upstairs for her (I was usually upstairs working), and she would not need the light downstairs. But I had to come up with a solution when I prepared to leave.

The first solution was adding a PIR (Passive Infra-Red) movement sensor at the bottom of the stairs to turn on the light on the landing. That meant that just taking the first step of the stairs would illuminate enough of the staircase that you could make your way up, and then the timer would turn it off. This worked well enough for a while, to the point that even our cat learned to use it (you could watch her take a step, wait for the light, then run all the way upstairs).

Then, when I was visiting a couple of years back, I noticed that something wasn't quite working right with the sensor, so I ordered a Philips Hue lightbulb instead — one of those with BLE in addition to ZigBee, so that it wouldn't require a bridge. At that point my mother could turn the light on and off with her phone, and the sensor wasn't needed anymore (I removed it from the circuit).

This worked much better for her, but earlier this year she complained that she kept forgetting to turn off the light before getting to bed, and then she'd have to go back to the staircase, as her phone couldn't reach the light otherwise. What would be a minor inconvenience for me and many of the people reading this was a major annoyance for my mother, as she's getting older, so I solved it by getting her a Hue Bridge and a couple of Smart Buttons: the bridge alone meant that her phone didn't need to reach the light directly (the bridge is reachable over WiFi), while the buttons restored a semblance of normality to turning the light on and off, as she used to before I re-wired the lights.

This is something that Alec from Technology Connections pointed out on Twitter some time ago: smart lights are, at the very least, a way to address bad placement of lights and switches. Indeed, over ten years later, my mother now has normal-acting staircase lights that do not burn out every six months. Thank you, Internet of Things!

While we won’t be visiting my mother any time soon, due to the current pandemic and the fact she has not been called in for the vaccine yet, once we do I’ll also be replacing the lightbulbs in my bedroom. My office there is now mostly storage, sadly but we have been staying in my old bedroom that still has the same lights I wrote about in 2008. A testament to them lasting much longer now that the wiring is right, given that for the most part I don’t remember spending more than a few months without needing to replace a lightbulb somewhere, when I was living there.

I chose the 2008 lights to keep the light to a minimum before going to bed, which I still think is a good idea, but they do make it hard to clean and sort things out. Back then I was toying with the idea of building a controller to turn on only part of the lights; nowadays the available answer is to just use smart lights and configure them with separate scenes for bedtime and cleaning time. And buttons and apps mean that, for the most part, there is no need for a separate bedside lamp anymore.

Sharing A Flat Planned For One

When I moved to London, I went looking for an apartment that, as I put it to the relocation aide, would be "soulless" — I had heard all the jokes about real estate agents describing flats that are falling apart as having "character", or big design issues as "the soul of the building", so I wanted to make clear I was mostly looking for a modern flat, something I would be able to just tweak to my own liking.

I was also looking for an apartment that was meant to be mine, with no plan of sharing. Then I met my (now) wife, and that plan needed some adjustments. One of the things that became obvious early on was that the bedroom lights were not really handy. Like pretty much all the apartments I have lived in outside of Italy, that one had GU-10 spotlight holders scattered across the ceiling, and a single switch for them by the door. This worked okay for me alone, as I would just turn the light off and then lie on the bed, but it becomes an awkward dance of elbows when you share a fairly cozy mattress.

So early on I decided to get an IKEA Not floor lamp, and a smart light. I settled on the LIFX Mini, because it supported colours (and I thought that would be cool), didn't need a bridge, and also worked over the local network without an Internet connection. The only fault in my plan was getting a Not with two arms at first, which meant we turned the wrong light off a few times, until I Sugru'd away the main switch on the cable.

I said "at first", because we then realised that this type of light is great not only in the bedroom but also in the living room. When watching TV, keeping the main light on would be annoying (the spots were very bright), but turning it off entirely would be tiring for the eyes. So we got a second Not lamp and a second LIFX bulb, and reshuffled them a bit, so that the two-armed one moved to the living room, where the additional spotlight is sometimes useful.

This worked very nicely for the most part, to the point that I wondered whether my home office could use some of that. This was back in the beforetimes, when the office (i.e. the second bedroom) would mostly be used to play games in the evening, by my wife when not using the laptop, or in the rare times I would be working from home (Google wouldn't approve). That meant that in many cases having the full light on would be annoying, while in other cases a very dim light wouldn't work well either. In particular, for a while I kept my wife company while she played by reading, and I wanted light on the side of my seat, but not as much on hers.

Since the office only had four spots, we decided to buy four LIFX GU-10 lights. I'm not linking to those because the price seems to have gone off the charts, and they now cost more than twice what we paid for them! These are full-fledged LIFX bulbs, with WiFi and the whole colour range. Lovely devices, but also fairly bulky — I'll get back to that later on.

Overall, this type of selective smart lighting helped significantly with the enjoyment of the flat, by reducing the small annoyances that would present themselves on a regular basis, like navigating the bedroom in the dark, or having the light too bright to watch TV. So we were looking to replicate that after moving.

No Plan Survives Contact With The New Flat

When we moved to the new flat, we knew that a number of things would have to change. For instance, the new office had six spots rather than four, and the layout of the rooms meant that some of the lamp placements would not work as well as they had previously.

Things got even a little more complicated than that: the LIFX GU-10 bulbs are significantly bigger than your average GU-10 spots, so they didn’t actually fit at all in the receptacles this flat used! That meant we couldn’t get them in, even if we decided to only keep four out of the six.

It’s in this situation that we decided to bite the bullet and order a Philips Hue Bridge for our flat, together with six White Ambiance spots: these are the same size as normal GU-10 spots, and have no issues fitting in the receptacles, so they could be used in the new office. While there is a model supporting colour changes as well as white temperature balance, in the office we never really used colour lighting enough to justify the difference in price (though we did, and still do, rely on the colour temperature selection).

Once you have a bridge, adding more lights becomes cheaper than buying a standalone light, too. So we also added a “reading light” to the bedroom, which is mostly for me to use if I can’t sleep and I don’t want to wake up my wife.

Another thing we would have liked to do was to replace the clicky switch for the ensuite with a soft/smart button. The reason for that was twofold: the first is that it took us months to get used to the placement of the switch in this flat; the second is that the switch is so damn noisy when clicking that it kept waking the other up when one of us went to use the toilet overnight. Unfortunately, changing the switch appears to be non-trivial: with our landlord’s permission we checked the switch connection, and it is not wired with the correct L/N positions or colours, and I could see a live phase on three out of the four positions.

Instead, once we confirmed that the switch did not control the extraction fan, we decided to put in three Hue spots in the bathroom, this time not the temperature-controlled ones, but just the dimmable ones. And at that point, we could keep a single one at 1% overnight, and not need to turn anything on or off when using the restroom during the night: it is very dim, so you don’t wake up, but you can still see plenty to use the toilet and wash your hands. This, by the way, was an idea that came to me after watching some of BigClive’s videos: a very low level of light makes for a good overnight light that avoids waking yourself up fully.

To explain just how useful this setup ended up being for us: the ensuite has three spots, one pretty much in the shower box, the other two by the door and in the middle. Overnight, we only leave the shower spot running at 1%, with the other two off. If we need more light, for instance to brush our teeth, or for me to get my insulin, we have an “on” scene in which all three spots are at around 30%. When we’re taking a shower, we only turn the shower spot up to 100%. And when we’re cleaning the bathroom, we set all three to 100%. At maximum light, the new bulbs are brighter than the old ones were; in the “on” scene we use, they are much less bright, because we don’t really need that much light on all the time.
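To make the scenes concrete, here is a minimal sketch of how I think about them: plain data mapping each spot to a brightness percentage, applied through whatever API your bridge or controller exposes. Note that `set_brightness` here is a stand-in for illustration, not an actual Hue or Home Assistant call:

```python
# Ensuite scenes: each maps a spot to a brightness percentage (0 = off).
# The values mirror the ones described above.
SCENES = {
    "night": {"shower": 1, "door": 0, "middle": 0},
    "on": {"shower": 30, "door": 30, "middle": 30},
    "shower": {"shower": 100, "door": 0, "middle": 0},
    "cleaning": {"shower": 100, "door": 100, "middle": 100},
}


def apply_scene(name, set_brightness):
    """Apply a named scene through a caller-provided control function."""
    for spot, level in SCENES[name].items():
        set_brightness(spot, level)


# Example: print what would be sent, instead of talking to real hardware.
apply_scene("night", lambda spot, level: print(f"{spot} -> {level}%"))
```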

We have also ordered more spots, this time from IKEA, which sells an even cheaper model, although with not as good low-light performance. We could do this because I’ve recently replaced the Hue Bridge with a ZigBee stick on our Home Assistant, and I’ll go into more details about that in a future post. At the time I’m writing this the spots have not arrived yet, but we decided that, now that we’re more likely to be going out again (we both got our first dose of vaccine, and will soon receive the second), it makes sense to have a bit more control over the lights in the entrance.

In particular, the way the entrance is currently set up, we turn on all six spots in the T-shaped hallway every time. When coming back during the day, only one spot is necessary, to take off shoes and put down keys: the rest of the flat is very bright and does not need illumination. And similarly, when just going back and forth during the evening, only the two or three spots illuminating the top of the T are needed, while the ones at the door are just waste. Again, smart lights are helpful here to work around inflexible wiring.

Conclusion

I wrote before that I get annoyed at those who think IoT is a waste of time. You can find the IoT naming inane, you can find the whole idea of “smart” lights ridiculous, but however you label it, the ability to spread the control of lightbulbs further than the original wiring intended is a quality of life improvement.

Indeed, in the case of my mother’s house, it’s likely that the way we’ll solve the remaining problems with wiring will be by replacing most switches with smart lights and floor lamps.

And while I personally have some reservations about keeping the “off” lightbulbs connected to something, a quick back-of-the-envelope calculation I did some months ago shows that even just the optimisation of being able to automate turning lights on and off based on presence, or the ability to run the lights at lower power most of the time, can reduce the yearly energy usage considerably (although not quite to the point that buying all-new bulbs would save you money, if you already have LED lights).
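For the sake of illustration, here is a sketch of that back-of-the-envelope calculation, with all numbers being assumptions rather than measurements: six 5 W LED spots, four hours a day, versus presence-based automation that keeps only two spots running, dimmed:

```python
# Baseline: six 5 W LED spots, four hours a day, all year.
# All of these numbers are illustrative assumptions, not measurements.
baseline_kwh = 6 * 5.0 * 4.0 * 365 / 1000

# With automation: only two spots actually needed, dimmed to roughly
# 40% of full power for the same four hours.
smart_kwh = 2 * 5.0 * 0.4 * 4.0 * 365 / 1000

print(f"baseline: {baseline_kwh:.1f} kWh/year")  # ~43.8 kWh/year
print(f"smart:    {smart_kwh:.1f} kWh/year")     # ~5.8 kWh/year
```

At typical electricity prices, that difference is worth a few pounds a year per room: real, but nowhere near enough to quickly pay back a full set of new smart bulbs, which matches the caveat above.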

So once again, and probably not for the last time, I want to tip my hat to Home Assistant, Electrolama, and all the other FLOSS projects that work with the wishes of people, rather than against them, to empower them to have smart home technologies they can control.

What To Look For In A Glucometer

I received a question some time ago about which glucometer I would recommend, if any, and I thought I would put down some notes about this, since I do have opinions, but I also know full well that I’m not the right person to recommend medical devices in the first place.

So, just so we’re clear: I’m not a medical professional, and I can only suggest what to look for when choosing a blood sugar meter. If your GP or diabetologist recommends one in particular over the others, I would recommend you follow their suggestion over mine.

So first of all, what are we talking about? I’m going to be focusing on blood glucometers exclusively, because the choice within CGMs (Continuous Glucose Meters) and “flash” meter solutions (such as the FreeStyle Libre) is more limited, and I have even less experience with those. I have also found, from personal experience and from talking with a number of other CGM users, that those tend to be a lot more temperamental, and a much more personal choice.

Blood glucometers, as the name implies, work by measuring the sugar in a drop of blood (usually taken by pricking a finger) placed onto a chemically reactive “strip” (with the exception of one device, to my knowledge). I have over the years “reviewed” a number of meters on this blog, simply because I end up getting my hands on them as a hobby nowadays, to reverse engineer and implement support (when feasible) in my open source tooling.

Let me repeat: I do not have the technical expertise to judge the clinical effectiveness of glucometers. I do know that most of them have a wide margin on their readings, to the point that you may have noticed me annoyed at the differing readings on a recent stream. Most meters provide details about their accuracy in the paperwork, and they assert that you can use a specific calibration solution to verify that your particular device satisfies that calibration. Unfortunately, when I looked at this in the past, I couldn’t figure out whether there is a universal solution that could be used to compare the readings of the same exact concentration across different meters.

So, if I wasn’t using the Libre, what glucometer would I be using, and why? Most likely I’d still be using the Accu-Chek Mobile. As far as I am aware it’s still the only model of meter that uses a cartridge system rather than strips, and when going out for dinner or coffee, or being in the office, it’s nice to have the option to just check your blood sugar without having to worry about getting blood all over the place, or having to find somewhere to throw the now-used strip. While the device is not the smallest glucometer out there, the carrying case still makes for a much more compact solution than most of the alternatives, and the USB “thumbdrive” with data and graphs makes it very easy to access with most devices. I have not tried the Bluetooth integration kit, though I did order one; I guess that is aimed at the increasing number of people who do not use a “computer” daily, but do have access to a smartphone.

But this is just my choice, obviously. If you live in a country that does not provide the strips to you for free (or you don’t have an official diagnosis for which they would be provided for free), then the cost of the supplies is likely the most significant factor. Many of the manufacturers appear to have taken the “razor and blades” approach of giving out the meters for free, or nearly free, but charging you (or your healthcare system) heavily for the strips. So it might be worth looking at the price of strips in your country to figure out the long-term cost of using one meter over another.

This is my best guess as to why people appear to keep finding my reviews of Chinese glucometers: to the best of my understanding there are a number of countries, including Russia, where meters and strips are paid for out of pocket, and so people turn to AliExpress, because there’s enough supply — most Chinese meters appear to use the same strips, and sellers undercut each other all the time, particularly when the strips are about to expire.

If price and availability are not an issue, and neither is the cartridge-versus-strips question, then it comes down to features. Is the meter going to be used by an older person with eyesight issues? Look for a very big display. Is it going to be used by someone who has trouble estimating at a glance what is okay and what isn’t (noting that this category cuts pretty much across age groups and education levels)? Look for a colour display that includes Green/Yellow/Red indicators, possibly one where the thresholds are configurable.

Some of the features also depend on your doctors’ take on technology. My diabetologist in Dublin didn’t have any diabetes management system for me to upload readings to, so I settled for running the exports with whichever software worked and sending them over as PDFs, while my new support team here in London uses Abbott’s LibreView. Am I completely comfortable turning to a cloud solution? No. But in the grand scheme of things it’s a tradeoff that works well for me, particularly after a year of the Covid-19 pandemic, during which showing up at the hospital just to hand off my meter to be downloaded into the system would not have been a fun experience.

So if your medical team has set you up with specific software to upload your data to, you probably want to choose a compatible meter. That might mean either one that has PC connectivity, so you can download it with a specific client, or one that has Bluetooth connectivity, so that you can download it with your phone. With additional complications for macOS users, and pretty much zero support for Linux users outside of the devices supported by my glucometerutils, Tidepool, or other similar solutions.

Different software also has different “analytics” of readings, with averages before and after meals, bucketed by time of day. Honestly, I don’t think I ever had enough useful information from a blood meter to build significant statistics out of it, but if that’s your cup of tea, it might be a good feature to choose your meter by (unless you’re using glucometerutils, in which case just get any supported meter and build the analytics yourself).
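As a sketch of what I mean by bucketed analytics (just an illustration, not the format glucometerutils or any vendor software actually uses), computing per-time-of-day averages from a list of readings takes a few lines of Python:

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime
from statistics import mean


@dataclass
class Reading:
    timestamp: datetime
    value: float  # mg/dL in this sketch; some meters report mmol/L


def bucket_label(ts):
    """Very rough time-of-day buckets; real software also uses meal markers."""
    if 6 <= ts.hour < 11:
        return "morning"
    if 11 <= ts.hour < 17:
        return "afternoon"
    if 17 <= ts.hour < 22:
        return "evening"
    return "night"


def averages_by_bucket(readings):
    buckets = defaultdict(list)
    for reading in readings:
        buckets[bucket_label(reading.timestamp)].append(reading.value)
    return {label: mean(values) for label, values in buckets.items()}


readings = [
    Reading(datetime(2021, 6, 1, 7, 30), 110.0),
    Reading(datetime(2021, 6, 1, 13, 0), 145.0),
    Reading(datetime(2021, 6, 1, 19, 45), 132.0),
]
print(averages_by_bucket(readings))
```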

Again depending heavily on who’s going to be using it, it’s important to take into consideration the physical size and a few of the practicalities of using a blood meter. Smaller meters work great if you have small hands, but they would be too fiddly to operate for someone with large, less nimble hands. That’s why a lot of the models aimed at older people (with sound, a large display, etc.) are often designed to be big, with large and mushy buttons rather than small and clicky ones. The same goes for strips: as I noted in the GlucoMen Areo review, Menarini did a very nice job with fairly large strips that are easier to handle, compared to, say, the tiny strips used by FreeStyle or OneTouch. But even with tiny strips, the meter can make them easier to handle: both of the Chinese meters I have reviewed have a lever to eject the strip directly into the trash, rather than having to take it out with your fingers — while I suspect this may just be cultural, it’s definitely a useful feature for those who are squeamish about handling bloodied strips.

I would say that it’s pretty much impossible for one meter to have all of the best characteristics, because a lot of those are subjective: I have nimble fingers, good numeracy, and a few reservations about sharing my data with unknown cloud providers, but a medical team that does indeed use a diabetes management system. So if I had to be looking for a new meter (rather than the Libre) right now, I would probably be looking for a compact meter that can be downloaded either with an application that exports directly to my doctors’, or with one that can generate a file I can email them, and the Accu-Chek still fits the bill: it does not have a colourful display to tell me whether something is in range or not, and its buttons are clicky and not too wide, but it’s a tradeoff that works for me.

This should probably also explain why I talk about the stuff I talk about when I write my glucometer reviews: it’s all about how the device feels, what features it has, and how well it does what you want. Some of the models are more intuitive than others, and some have tradeoffs that don’t work for me, but I can see where they came from. I cannot compare the accuracy, since I don’t have the training to do so, but I can compare the rest of the features, and that’s what I focus on: it’s what most people will do anyway.

Rose Tinted Glasses: On Old Computers and Programming

The original version of this blog post was going to be significantly harder to digest, and it actually was much more of a rant than a blog post. I decided to discard that, and try to focus on the positives, although please believe me when I say that I’m not particularly happy with what I see around me, and sometimes it takes strength not to add to the annoying amount of negativity out there.

In the second year of the Coronavirus pandemic, I (and probably a lot more people) have turned to YouTube content more than ever, just to keep myself entertained in lieu of having actual office mates to talk with day in and day out. This meant, among other things, noticing the retrocomputing trend a lot more: a number of channels are either dedicated to talking about games from the ’80s and ’90s and computers from the same era, or seem to at least spend a significant amount of time on those. I’m clearly part of the target audience, having grown up with some of those games and systems, and now being in my 30s with disposable income, but it does make me wonder sometimes about how we are treating the nostalgia.

One of the things that I noted, and that actually does make me sad, is when I see some video insisting that old computers were better, or that people who used them were smarter, because many (Commodore 64, Apple II, BBC Micro) only came with a BASIC interpreter, and you were incentivised to learn programming to do pretty much anything with them. I think this thesis is myopic, and lacks not just empathy, but also an understanding of the world at large. Which is not to say that there couldn’t be good ways to learn from what worked in the past, and make sure the future is better.

A Bit Of Personal History

One of the things that is clearly apparent watching different YouTube channels is that there are chasms between different countries when it comes to having computers available at an early age, particularly in schools. For instance, it seems like a lot of people in the USA had access to a PET in elementary or junior high school. In the UK instead, the BBC Micro was explicitly designed as a learning computer for kids, and clearly the ZX Spectrum became the symbol of an entire generation. I’m not sure how much bias there is in this storytelling — it’s well possible that for most people, all of these computers were not really within reach, and only a few expensive schools would have had access to them.

In Italy, I have no idea what the situation was when I was growing up, outside of my own experience. What I can say is that until high school, I hadn’t seen a computer in school. I know for sure that my elementary school didn’t have any computer, not just for the students, but also for the teachers and admins, and it was in that school that one of the teachers took my mother aside one day and told her to make me stop playing with computers because «they won’t have a future». In junior high, there definitely were computers for the admins, but no student was given access to anything. Indeed, I knew that one of the laboratories (which we barely ever saw, and really never used) had a Commodore (either 64 or 128) in it. Those were the same years in which I finally got my own PC at home: a Pentium 133MHz. You can see there is a bit of a difference in generations there.

Indeed, it might even sound strange that I had a Commodore 64 at all. As far as I know, I was the only one in my school who did: a couple of other kids had a family PC at home (which later I kind of did too), and a number of them had NES or Sega Master Systems, but the Commodore’s best years were long gone by the time I could read. So how did I end up with one? Well, as it turns out, not as a hand-me-down from anyone older than me, which would be the obvious option.

My parents bought the Commodore 64 around the time I was seven, or at least that’s the best I can date it. It was, to the best of my knowledge, after my grandfather died, as I think he would have talked a bit more sense into my mother. Here’s the thing: my mother has always had a quirk for encyclopaedias and other book collections, so when my sisters and I were growing up, the one thing we never lacked was access to general knowledge. Whether it was a generalist encyclopedia with volumes dedicated to the world, history, and science, or a “kids’ encyclopedia” that pretty much only covers stuff aimed at preteens, or a science one that goes into the details of the state of the art of scientific thinking in the ’80s.

So when a company selling a new encyclopedia, supposedly compiled and edited locally, called my parents up and offered a deal of 30 volumes, bound in nice green covers, and printed in full colour, together with a personal computer, they lapped it up fairly quickly. Well, my mother did, mostly; my father was never one for books, and generally couldn’t give a toss about computers.

Now, to be honest, I have fond memories of that encyclopedia, so it’s very possible that this was indeed one of the best purchases my parents made for me. Not only was most of it aimed at the elementary-to-junior-high ages, including a whole volume on learning grammar rules and two on math, but it also came with some volumes full to the brim with questionable computer knowledge.

In particular, the first one (Volume 16, I still remember the numbers) came with a lot of text describing computers, sometimes in detail so silly that I still don’t understand how they put it together: it is here that I first read about core memory, for instance. It also went into long detail about videogames of the time, including text and graphical adventures. I really think it would be an interesting read for me now that I understand and know a lot more about the computers and games of that era.

The second volume focused instead on programming in BASIC. Which would have been a nice connection to the Commodore 64, if not for the fact that the language described was not the one the Commodore 64 used in the first place, and it didn’t really go into the details of how to use the hardware, with POKE and PEEK and the like. Instead it tried to describe some support for printers and graphics, which never worked on the computer I actually had. Even when my sister got a (second) computer, it came with GW-BASIC, and it was also not compatible.

What the second volume did teach me, though, was something more subtle, which would take me many years to understand fully. And that is that programming is a means to an end, for most people. The very first example of a program in the book is a father-daughter exercise in writing a BASIC program to calculate the area of the floor of a room based on triangles and Heron’s Formula. This was a practical application, rather than teaching concepts first, and that may be the reason why I liked learning from it to begin with.
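I no longer have the book’s BASIC listing, but the same exercise translates into a few lines of Python; the side lengths here are made-up measurements, and the idea is that an irregular floor can be split into triangles and the areas summed:

```python
import math


def heron_area(a, b, c):
    """Area of a triangle from its three side lengths (Heron's Formula)."""
    s = (a + b + c) / 2  # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))


# An irregular quadrilateral room, split along a diagonal into two
# triangles; all lengths in metres, purely illustrative.
area = heron_area(3.0, 4.0, 5.0) + heron_area(5.0, 2.5, 4.2)
print(f"floor area: {area:.2f} m^2")
```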

Now let me rant for a moment: the last time I wrote something about teaching, I ended up tuning out of some communities, because I got tired of hearing someone complain that I cannot possibly have an opinion on teaching materials without having taught in academia. I have a feeling that this type of behaviour is connected with the hatred for academia that a number of us have. Just saying.

You may find it surprising that these random volumes of an encyclopedia my mother brought home when I could barely read stayed with me this long, but the truth is that I pretty much carried them along with me for many years. Indeed, the book had two examples that I nearly memorized, and that were connected to each other. The first was a program that calculated the distance in days between two dates — explaining how the Gregorian calendar works, including the rules for leap years around centuries. The second used this information to let you calculate a “biorhythm”, which was sold as some ancient Greek theory but was clearly just a bunch of “mumbo-jumbo”, as Adam Savage would say.

The thing with this biorhythm idea, though, is that it’s relatively straightforward to implement: the way they describe it, there are three sinusoidal functions that track three “characteristics” over different period lengths, so you calculate the “age in days” and apply a simple mathematical formula, et voilà! You have some personalised insight that is worth nothing, but that some people believe in. I can’t tell for sure if I ever really believed in those, or if I was just playing along like people do with horoscopes. (One day I’ll write my whole rant on why I expect people may find horoscope sign traits to be believable. That day is not today.)
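For the curious, here is a minimal sketch of the whole thing in Python. The period lengths (23, 28, and 33 days) are the ones usually quoted for this pseudoscience, and the birthday is an arbitrary example; note how the standard library’s date handling replaces the entire first program from the book, Gregorian leap year rules included:

```python
import math
from datetime import date

# The classic "biorhythm" periods, in days. Pure mumbo-jumbo,
# but trivially easy to compute.
PERIODS = {"physical": 23, "emotional": 28, "intellectual": 33}


def biorhythm(birthday, day):
    """Each curve is a sine wave over the age in days: sin(2*pi*t/period)."""
    t = (day - birthday).days  # the Gregorian calendar math is done for us
    return {
        name: math.sin(2 * math.pi * t / period)
        for name, period in PERIODS.items()
    }


print(biorhythm(date(1980, 1, 1), date.today()))
```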

So, with that as a base to build on, I pretty much reimplemented this same idea over, and over, and over again. It became my “go to” hello world example, and with enough time it allowed me to learn a bit more about different systems. For example, when I got my Pentium 133 with Windows 95, and one of the Italian magazines made Visual Basic 5 CCE available, I reimplemented it for that. When the same magazine eventually included a free license for Borland C++ Builder 1.0, as I was learning C++, I reimplemented it there. When I started spending more of my time on Linux and wanted to write something, I did the same.

I even got someone complaining that my application didn’t match the biorhythm calculated by some other app, and I had to find a diplomatic way to point out that there’s nothing scientific about either of those, so why should they even expect two apps to agree?

But now I’m digressing. The point I’m making is that I have, over the years, kept the lessons learned from those volumes with me, in different forms and in different contexts. As I said, it wasn’t until a few years back that I realized that for most people, programming is not an art or a fun thing to do in their spare time: it’s just a means to an end. They don’t care how beautiful, free, or well designed a certain tool is, if the tool works. But it also means that knowing how to write some amount of software empowers people. It gives them the power to build the tools they don’t have, or to modify what is already there but doesn’t quite work the way they want.

My wife trained as a finance admin, used to be an office manager, and has some experience with CAFM software (Computer Aided Facilities Management). Most CAFM suites allow extensions in Python or JavaScript, to implement workflows that would otherwise be manual and repetitive. This is the original reason she had to learn programming: even in her line of work, it is useful knowledge to have. It also has the effect of making it easier to understand spreadsheets and Excel — although I would say that there’s plenty of people who may be great at writing Python and C, but would be horrible Excel wranglers. Excel wrangling is its own set of skills, and I bow to those who actually have them.

So Were Old Computers Better?

One of the often repeated lines is that old computers were better, because either they were simpler to understand in one’s mind, or because they all provided a programming environment out of the box. Now, this is a particularly contentious point to me, because pretty much every Unix environment has always had the same ability to provide a programming environment. But also, I think that the problem here is what I would call a “bundling of concerns”.

First of all, I definitely think that operating systems should come with programming and automation tools out of the box. But in fact, that has (mostly) been the case since the time of the Commodore 64, for me personally. On my sister’s computer, MS-DOS came with GW-BASIC first (4.01), and QBasic later (6.22). Windows 98 came with VBScript, and when I first got to Mac OS X it came with some ugly options, but options nonetheless. The only operating system that didn’t have a programming environment for me was Windows 95, but as I said above, Visual Basic 5 CCE covered that need. It was even better with Active Desktop!

Now, as it turns out, even Microsoft appears to be working to make it easier to code on Windows, with Visual Studio Code being free, Python being available in the Microsoft Store, and all those trimmings. So it’s hard to argue that there aren’t more opportunities to start programming now than there were in the early ’90s. What might be arguable is that nowadays you do not need to program to use a computer. You can use a computer perfectly fine without ever having learnt a programming language, and you don’t really need to know the difference between firmware and operating system, most of the time. The question becomes whether you find this good, or bad.

And personally, I find it good. As I said, I find it natural that people are interested in using computers and software to do something, and not just for the experience of using a computer. In the same way, I think most people would use a car to go to the places they need to go to, rather than just for the sake of driving. And in the same spirit: there are people who enjoy the feeling of driving even when they don’t have a reason to drive, and there are people who enjoy tinkering with computers and technology beyond what they strictly need.

I wish I found it surprising, but I just find it saddening that so many developers seem to fall into the trap of thinking that, just because they became creative by writing programs (or games, or whatever), computer users who no longer have to learn programming must be less creative. John Scalzi clearly writes it better than me: there’s a lot of creativity in modern devices, even those that are attacked for being “passive consumption devices”. And a lot of that creativity is not about programming in the first place.

What I definitely see is a pattern of repeating the behaviour of the generation that came before us, or maybe the one that came before them, I’m not sure. I see a number of parents (though thankfully by no means all of them) insisting that, since they learnt their trade and their programming a certain way, their kids should have the same level of tools available, no more and no less. It saddens me, even sometimes angers me, because it feels so similar to the way my own father kept telling me I was wasting my time inside, and wanted me to go and play soccer as he did in his youth.

This is certainly not only my experience, because I have talked with and compared stories with quite a few people over the years, and there’s definitely a huge number of geeks who have been made fun of by their parents, and left scarred by that. And some of them are going to do the same to their kids, because they think their kids’ choice of hobbies is not as good as the ones we had in the good old days.

Listen, I have said in the past that I do not want to have children. Part of it has always been the fear of repeating the behaviour my father had with me. So of course I should not be the one to judge what others who do have kids do. But I do see a tendency from some to rebuild the environment they grew up in, expecting that their kids will just pick up the same strange combination of geekiness they have.

At the same time I see a number of parents feeding the geekiness in their children with empowerment, giving them tools and where possible a leg up in life. Even this cold childfree heart warms up to see kids being encouraged to learn Scratch, or Minecraft.

What About All The Making, Then?

One of the constant refrains I hear is that older tools and apps were faster and more “creative”. I don’t think I have much in terms of qualifications to evaluate that. But I’m also thinking that, for the longest time, creativity tools and apps were only free if you pirated them. This is obviously not to dismiss the importance of FLOSS solutions (otherwise why would I still be writing on the topic?), but a lot of the FLOSS solutions for creativity appear to have a spirit similar to the computers of the ’80s: build your own tools if you want to be creative.

I’m absolutely sure that there will be people arguing that you can totally be creative with Gimp and Inkscape. I have also heard a lot of professionals laugh in the face of such suggestions, given the important features that tools like these have lacked, in comparison with proprietary software, for many years. They are not bad programs per se, but they do find their audience in a niche compared to Photoshop, Illustrator, or Affinity Designer. And it’s not to say that FLOSS tools can’t become that good: I have heard the very same professionals who sneered (and still sneer) at Inkscape point out how Krita (which has a completely different target audience) is a fascinating tool.

But when we look back at the ’90s, not even many FLOSS users would have considered Gimp a useful photo-editing tool. If you didn’t have the money for the creativity software, your options were most likely a pirated copy of Photoshop, or maybe, if you were lucky and an Italian magazine gifted it out, a license for Macromedia xRes 2.0. Or maybe FreeHand. Or Micrografx Windows Draw!

The thing is, a lot of the free-but-limited tools online are actually the first time that a wide range of people have been able to be creative at all. Without having to be “selected” as a friend of Unix systems. Without having to pirate software to be able to afford it, and without having to pony up a significant investment for something they may not be able to make good use of. So I honestly welcome that, when it comes to creativity.

Again: the fact that someone cannot reason around code, or the way that Inkscape or Blender work, does not mean that they are less creative, or less skilled. If you can’t see how people using other tools are being just as creative, you’re probably missing a lot of points I’m making.

But What About The Bloated Web?

I’ve been arguing for less bloat in… pretty much everything, for the past 17 years, on blogs and other venues. I wrote tools to optimize (even micro-optimize, in some cases) programs and libraries so that they perform better on tiny systems. I have worked on Gentoo Linux, which pretty much allows you to turn off everything you can possibly turn off, so you can build the most minimalistic system you can think of. So I really don’t like bloat.

So is the web bloated? Yes, I’d say so. But not all of it is bloat, even when people complain about it. I see people suggesting that UTF-8 is bloat. That dynamic content is bloat. That emojis are bloat. Basically anything they don’t need directly is bloat.

So it’s easy to see how your stereotypical 30-something, US-born-and-raised, English-only-speaking “hacker” would think that an unstyled, white-on-black-background (or worse, green-on-black) website in ASCII would be the apotheosis of the usable web. But that is definitely not what everyone would find perfect. People who speak languages needing more than ASCII exist, and are out there. Heck, people for whom UTF-8 is the bloated option compared to UTF-16, because its optimization for ASCII comes at the expense of their script, are probably the majority of the world! People who cannot read on a black background exist, and they are even developers themselves at times (I’m one of them, which is why all my editors and terminals use light backgrounds; I get migraines from black backgrounds and dark themes).
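To put a number on that asymmetry, here is a quick sketch comparing the byte cost of the same text under the two encodings; the sample strings are arbitrary:

```python
# Compare the byte cost of the same text in UTF-8 and UTF-16.
for text in ("hello", "ciao però", "こんにちは"):
    utf8 = len(text.encode("utf-8"))
    utf16 = len(text.encode("utf-16-le"))  # little-endian, no BOM
    print(f"{text!r}: UTF-8 {utf8} bytes, UTF-16 {utf16} bytes")

# 'hello':      UTF-8 5 bytes,  UTF-16 10 bytes -- ASCII wins under UTF-8
# 'こんにちは': UTF-8 15 bytes, UTF-16 10 bytes -- CJK wins under UTF-16
```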

Again, I’m not suggesting that everything is perfect and nothing needs to change. I’m actually suggesting that a lot needs to change, just not everything. So if you decide to tell me that Gmail is bloated and slow, and use that as the only comparison to ’90s mail clients, I would point out to you that Gmail has tons of features that are meant to keep users from shooting themselves in the foot, as well as being a lot more reliable than Microsoft Outlook Express or Eudora (which I know has lots of loyal followers, but I could never get behind myself), and also that there are alternatives.

Let me beat this dead horse a bit more. Over on Twitter, when this topic came up, I was given the example of ICQ vs Microsoft Teams. Now, the first thing is, I don’t use Teams. I know that Teams is an Electron app, and I know that most Electron apps are annoyingly heavy and use a ton of resources. So, fair, I can live with calling them “bloated”. I can see why they chose this particular route, and disagree with it, but there is another important thing to note here: ICQ in 1998 is barely comparable with a tool like Teams, which is pretty much a corporate beast.

So instead, let’s try to compare something that is a bit closer: Telegram (which I’m already known to use — rather than talking about anything I would have a conflict of interest on). How fast is Telegram to launch on my PC? It’s pretty much a single click to start, and it takes less than a second on the beast that is my Gamestation. It also takes less than a second on my phone. How long did ICQ take to load? I don’t remember, but quite a lot longer, because I remember seeing a splash screen. Which may as well have been timed to stay on the screen for a second or so because the product manager requested it, as happened at one of my old jobs (true story!)

And in that, would ICQ provide the same features as Telegram? No, not really. First of all, it was just messages. Yes, it’s still instant messaging, and in that it didn’t really change much, but it didn’t have the whole “send and receive pictures” thing we have on modern chat applications; you ended up having to do peer-to-peer transfers, and good luck with that. It also had pretty much *no* server-side support for anything, at least when I started using it in 1998: your contact list was entirely client-side, and even the “authorization” to add someone to your friend list was a simple local check. There were plenty of ways to avoid these checks, too. Back in the day, I got in touch with a columnist from the Italian The Games Machine, Claudio Todeschini (who I’m still in touch with, because life is strange and we met in person in a completely different situation many, many years later); the next time I re-installed my computer, having forgotten to back up my ICQ data, I didn’t have him in my contacts anymore, and, unsure whether he would remember me, I actually used a cracked copy of ICQ to re-add him to my contacts.

Again, this was the norm back then. It was a more naive world, where we didn’t worry that much about harassment, we didn’t worry so much about SWATing, and everything was just, well, simpler. But that doesn’t mean it was good. It only meant that if you did worry about harassment, if someone was somehow trying to track you down, if the technician at your ISP was actually tapping your TCP sessions, they would be able to. ICQ was not encrypted for many years after I started using it: not even client-to-server, let alone end-to-end like Telegram secret chats (and other chat clients) are.

Someone joked about trying to compare the software running on the same machine to judge the performance fairly, but that is an absolute non sequitur. Of course we use a lot more resources in absolute terms, compared to 1998! Back then I still had my Pentium 133MHz, with 48MiB of RAM (I upgraded!), a Creative 3D Blaster Banshee PCI (because no AGP slots, and the computer came with a Cirrus Logic that was notorious for not working well with the Voodoo 2), and a radio card (I really liked radio, ok?). Nowadays, my phone has an order of magnitude or two more resources, and you can find 8051s just as fast.

Old tech may be fascinating, and easier to get into when it comes to learning how it all fits together, but usable modern tech is meant to take trade-offs in favour of the users, more and more. That’s why we have UIs, that’s why we have touch inputs, that’s even why we have voice-controlled assistants, much as a number of tech enthusiasts appear to want to destroy them all.

Again, this feels like a number of people are yelling “Kids these days”, and repeating how “in their days” everything was better. But also, I fear there are a number of people who just don’t appreciate how a lot of the content you see on YouTube, particularly in the PC space of the ’90s and early ’00s, is not representative of what we experienced back then.

Let me shout out to two YouTubers that I find are doing it right: LGR and RetroSpector78. The former is very open to point out when he’s looking at a ludicrous build of some kind, that would never be affordable back in the day; the latter is always talking about what would be appropriate for the vintage and usage of a machine.

Just take all of the videos that use CF2IDE or SCSI2SD to replace the “spinning rust” hard drives of yore. This alone is such a speed boost on loading stuff that most people wouldn’t even imagine it. If you were to load a program like Microsoft Works on a system that would be period-correct except for the storage, you would experience a significantly different loading time than people did back in the day.

And, by the way, I do explicitly mean Microsoft Works, not Office, because, as Avery pointed out on Twitter, the latter was optimized for load speed — by starting a ton of processes early on, trading memory usage for startup speed. The reason why I say that is because, short of pirated copies of Office, most people I knew in the ’90s would at best be able to use Works, because it came pre-installed on their system.

So, What?

I like the retrocomputing trend, mostly. I love Foone’s threads, because one of the most important things he does is explain stuff. And I think that, if what you want is to learn how a computer works in detail, it’s definitely easier to do that with a relatively uncomplicated system first, and build up to more modern ones. But at the same time, I think there is plenty of abstraction that doesn’t need to be explained if you don’t want it to be. This is the same reason why I don’t think that using C to teach programming and memory is a great idea: you need to know too many details that newcomers are not actually meant to understand yet.

I also think that understanding the techniques used in both designing, and writing software for, constrained systems such as the computers we had in the ’80s and ’90s does add to the profession as a whole. Figuring out which trade-offs were and were not possible at the time is one step; finding, and possibly addressing, some of the bugs is another. And finally there is the point we’re getting to a lot lately: we can now build replacement components with tools that are open to everyone!

And you know what? I do miss some of those constrained systems, because I have personal nostalgia for them. I did get myself a Commodore 64 a couple of years ago, and I loved the fact that, in 2021, I could get the stuff I could never have afforded (or that didn’t even exist) back when I was using it: fast loaders, SD2IEC, a power supply that wouldn’t be useful as a bludgeoning instrument, and a SCART cable for a nice, sharp image, rather than the fuzzy one from the RF input I had to use back then.

I have been toying with the idea of trying to build some constrained systems myself. I think it’s a nice stretch of what I can do, but with the clear note that it’s mostly art, and not something meant to be consumed widely. It’s like Birch Books to me.

And finally, if you take only a single thing away from this post, it’s that you should always remember that a usable “bloated” option will always win over a slim option that nobody but a small niche of people can use.

Blogging From An Onyx Boox Max Lumi

Some time ago, I found a video from Technology Connections over on YouTube about using eInk tablets for productivity. It’s part one of a number of videos exploring the usage of electronic ink (or electronic paper) displays in Android tablets, which allow an experience that is a compromise between a fully featured Android tablet and a Kindle-like device.

This piqued my interest, which is not surprising given that, like Alec, I have been an early adopter of ebooks, suffering through the pain of my Sony PRS-505 before landing on Amazon’s love-hated Kindle. But in addition to the ideas that he showed in the videos, I was also already considering whether to get myself an ePaper drawing tablet to use to take notes and doodle diagrams to share on the blog, although I was considering the reMarkable rather than the Onyx at that point.

The main reason why I was mostly considering it, rather than going for it, was that it’s not a small investment. As I pointed out in a previous post, despite now having significantly easier access to funds, I’m trying to balance the investment in my personal visibility with having time (and resources) for my wife. Buying a device that is mostly to draw diagrams on is very much overkill, if what you do for a living is not drawing diagrams left and right. And while the idea of being able to doodle on eBooks was singing to my inner geek, I knew that it wouldn’t have been nearly as useful as the Kindle is to me.

Things changed when I got to the point in Alec’s videos where he points out how he used the device with a Bluetooth keyboard to work on his (video) scripts, to avoid tiring his eyes as much with a monitor. That spoke to me, and to my wife, who fell just short of ordering it for me, and instead insisted I get one. Which I did, together with a stand (which I will admit I don’t like). I already had a keyboard (a Microsoft Surface Ergonomic Keyboard that I bought a few years ago at the Microsoft Store in Seattle, and that I think could have been much better, but is still better than your average Bluetooth keyboard).

In terms of which device to get, I went for the highest end available, the Onyx Boox Max Lumi. The reason for that is once again to do with our old friend the compromise: a bigger device is harder to bring around, but I don’t think I’d be blogging on the go that much with this device. If I am going to be doing that, for instance at my mother’s house, if we ever get to see her this year, I would most likely be using my laptop. As such, getting a bigger screen means having more space to draw diagrams. In particular, the 13″ screen suggested it would be possible to use it horizontally, to write a blog post while reading off a different source, although that doesn’t look like it’s going to be feasible any time soon (give me a moment to get to that).

My original intention was to use this primarily through the WordPress application, because I thought it would be easier to use the block editor in that. Unfortunately, it looks like the WordPress app is not only not suitable for the Onyx Boox, but not suitable for many tablets either. The problems with the interface itself can be ignored, for the most part, if you just want to use it to type in blog posts. But because of the background colour effect the block editor uses to indicate contrast, the text in it ends up hard to read, with a “halo effect” around it.

Thankfully, similarly to what is reported in the Technology Connections videos, both Firefox (to a point) and the website (to a point) are usable enough not to make this a waste of time and money. Unfortunately, I think there is a long way to go before this is a much better platform for blogging.

The first question to ask is whether the display is even fast enough to use with a browser, and the answer is a resounding yes. Since the videos were made, it looks like Onyx has significantly improved the handling of refresh modes, introducing two “faster” refresh rates that come with more ghosting, but allow much smoother scrolling. They automatically enable these modes when scrolling, and they even introduced a one-tap “refresh the screen now” mode, which is one of the requests Alec made in the videos. So either Onyx is taking that feedback directly, or they have otherwise reached the same conclusions.

Despite the screen allowing extremely fast “drawing” (minus ghosting), it looks like applications need to be designed somewhat explicitly for these devices to make use of those capabilities. The included Notes app is a pleasure to draw on. The app of the Italian crosswords magazine La Settimana Enigmistica, on the other hand, is disappointingly slow, not showing the traced writing in real time at all (I’m still not sure why they don’t optimize more for tablets of this kind — I would argue that they could easily sell a branded tablet with EMR pens, and it would maybe not fly off the shelves, but definitely sell to a number of hardcore fans; I’d get one for my mother for sure).

As I said, I’m using this with a keyboard, and this is where the next problem is: Android is kind of terrible when it comes to physical keyboards. They work, mostly, but everything is a bit odd around them. For instance, the default AOSP-based keyboard supports a dead key for grave (`) but not for single and double quotes (‘ and “), and it turns out I rely on those being dead keys a lot — all of my typing would look off without them.

I found a way around this, with a £2 application that allowed me to configure the behaviour of individual dead keys, but it doesn’t quite solve it for me. The next thing I need is a way to have a compose-like behaviour that would allow me to access em- and en-dashes. I am actually considering two options for that: the first is to create a custom layout, the way the tool I bought allows me to (very complicated and annoying, and it reminds me of having to reinvent US International on Mac OS X with Ukelele); the second would be to figure out how hard it is to take the AOSP keyboard and make a US International keyboard with compose behaviour out of it. Of course, if anyone is aware of that already existing, I’m happy to take it. Open source would be an advantage (to fix things if something doesn’t quite work the way I want it to), but I’m happy to take something closed and paid if it exists, to avoid having to deal with side-loading.

But the most annoying problem with the keyboard might not even be a problem of Android itself. It might be a problem of Firefox, or WordPress, or something in the whole unlikely setup: sometimes, when I navigate to a different point in the post and try to edit it, characters are inserted in a different place. Often at the end of the post, sometimes at the location I had moved away from a few seconds before. This is not constant, and it happens whether I use the keyboard to navigate or tap on the page. Given I’m using the web version of WordPress, it might be a browser/JavaScript/uncommon-setup problem, but it is aggravating. I have tried Edge, to see if it would make things better; despite being able to give a much better experience, closer to a normal desktop (including the presence of an as-you-type spellchecker), its insistence on zooming in on the WordPress interface as I start typing makes it impossible to use.

As you probably noticed by now, I have not really used this to make diagrams yet. I’m not sure I have anything at this very moment that would benefit from a diagram, although I do have a couple of ideas for later. For now I’ve only used it to scribble and try out the Staedtler Noris Digital (which I originally got for crosswords on the Samsung tablet). Again, this needs specific support for this type of device, and so most of the tools around do not work well. Microsoft OneNote is unusable due to latency; Jamboard is a bit better, but nowhere close to the built-in Notes app; and Squid is nearly usable, but again, the internal Notes and Books apps are so much faster.

One thing that is definitely clear from using this device for a week or so is that, for this device class to be a killer, we need applications to optimize for it. It might sound like an empty phrase, given that I do not work on any application that might be involved, but a number of these different device classes have come up over the years, and sometimes they managed to establish themselves. When I first got an Archos Android TV device, it was nearly one of a kind. It wasn’t easy to use, and it wasn’t particularly powerful. And because it was pretty much based on the Samsung Galaxy design, it ended up going stale pretty quickly. On the other hand, there are now a number of Android-based TV devices, not least Amazon’s own Fire Stick. So the class itself didn’t disappear despite that one device being, frankly, a failure.

Similarly, while Samsung was first to market (or at least the loudest to market) with high-precision, pressure-sensitive pens in their Galaxy Note series, the same technology is now in use by a number of other manufacturers, including this very device, and supported by a number of different applications. So I do not see why, in 2021, we shouldn’t expect more applications to make use of the tools that are available, even if it’s for a small percentage of users for now. So if anyone is aware of any Android application that makes use of the fast-drawing capabilities of this and other devices, even if it’s a paid app, please let me know.

I have now used this device for about a week as I finish drafting this post. My impression is that it was a good investment for my eyes, particularly as working from home during a pandemic means not resting them as much as I used to (no more stepping away from the desk to grab a coffee with Luke when something irks me, no more hour-and-change break after going through the Californian email, while commuting to the office). It doesn’t mean I’m actually resting my eyes more, but it does mean I don’t tire them as hard as I used to.

There is also one more interesting thing here, which is the fact that, for the blog, a vertical monitor makes a lot more sense than a horizontal one! Unfortunately, it looks like it’s still hard to get keyboard covers that allow you to use the tablet vertically, and let’s not even talk about vertical laptops. But the truth is that, for a long, rambly document, the vertical space is more important than the horizontal. WordPress’s own editor does not really scale to fill the whole horizontal space when you use it on a normal monitor, but it works like a charm in the “mobile” version as loaded by Firefox.

There are a few more things that would make the overall experience more “professional”. As I said, if the WordPress app was usable, it would be much easier to type, rather than having to deal with the silly “cursor is in one place, characters appear somewhere else” situation. And if Edge didn’t randomly zoom in the wrong place, it would probably be the best browser to use on this particular device, including the ability to switch tabs with Ctrl-Tab (with the external keyboard, of course).

The other thing this may just be usable for is coding. Not the “full software engineering” type of coding, but if you are working on, say, adding documentation to an extensive codebase, it might be a good thing to have at hand, if nothing else because it’s vastly distraction-free, and makes for an easy-to-read screen. You could say that this is the modern equivalent of using a monochrome display with a Commodore 64 to get the sharper fonts.

At any rate, you can probably expect more blog posts and possibly Twitter questions and rants, over the next few months, as I try my best to keep up with the blogging as the lockdown finally eases, and I can finally go out and enjoy my cameras, too.

Country You Go, Banking You Find

If you have been following this blog for long enough, you probably know I “enjoy” writing about banks, or at least I end up doing that quite a lot, whether I enjoy it or not. Part of that is because I still find myself reasoning about money the way I did when I had my own company, and part of it is because, well, I have opinions about what good banking looks like. Part of the reason why I have that opinion is that I have seen a lot of what good banking is not, and it gives me material to keep writing.

Of course, I’m also lucky, because I have at this point seen how banking works (or fails to) in multiple countries, and I can at least compare the points of view of a user in those contexts. I don’t have any idea how this works from the other side of the fence, of course, as I have (thankfully) never worked for a financial institution. Not that I haven’t considered it (hey, I live in London after all), but then I remember that I would probably find that things are even more screwed up than I can see from the outside, and decide to put my savings into gold bullion — as it stands, I just keep getting depressed by everything tech, and have been considering opening a coffee shop and bakery instead.

And I do mean it, when I say that different countries have… pretty much nothing in common when it comes to banks and banking — costs, services, expected behaviour, and contacts. Some of it appears to be so culturally entrenched that suggesting changing something would probably be considered heresy.

Italy

In Italy it is (or at least was) common for current accounts to have a monthly fee, although a somewhat nominal one, and this usually doesn’t include much more than a bank card. I used to have a cheque book — but only because, at the time, my family used cheques for a lot of things. I think I only ever used it once, and I can’t remember what for.

Outbound wires were expensive back when I was using them, rising in price over time, and priced differently depending on whether I was wiring to the same bank or to a different one. Cashing in cheques was free, as long as they were in the same currency, while receiving inbound wires depended on both the currency and the source of the wire — when I was a contractor working for Google on ChromeOS, the invoice payments arrived in dollars, but from within Europe, and they didn’t cost me enough “to notice”; when I received the payments from another customer, they dropped hundreds of euro into the bank’s coffers — which is why I am happy that TransferWise exists now.

While Italy had early federated ATMs that allowed you to access your money at any time, there is one thing that I have noticed is not common elsewhere: accessing ATMs outside of your bank’s own network is generally a paid option. And I don’t mean that in the USA sense (which I’ll get to later) of ATMs that charge you a fee to use them — I mean that the bank will charge you a fee to use an ATM of a competitor. This got to the ironic point where, when visiting Italy after moving to Dublin, it was cheaper for me to use an Irish debit card to get some cash out of a machine, because my Italian bank didn’t have any ATM in an easily reachable place for me to use. This might be annoying to me, but it’s a significant pain for people like my mother who don’t drive, and who need to be driven to another town to find a free machine.

Another interesting note about Italian bank cards is that, up until very recently, they were not usable online. I think nowadays most banks have at least a paid option to give you a normal Visa or MasterCard debit card, but for a long time you ended up with cards that couldn’t be used online at all — even when they had 19 digits on them for the Maestro circuit. Indeed, back when I hit these issues I did ask whether I should get a VPay card (Visa’s answer to Maestro, before Visa Debit), but I was told that it would have been accepted in fewer places in Italy itself.

Credit cards are still uncommon — before bank cards could be used online, the vast majority of people I knew who made purchases on eBay would be using pre-paid cards (Visa Electron or MasterCard Debit), which you topped up for a fee. The most common one, at least back then, was issued by the Italian post (Poste Italiane) under the name PostePay — it was also one of the biggest scam targets, as many eBay sellers would pretend that the PostePay logo appearing on their listings meant that you would pay directly to a pre-paid card, rather than through PayPal – a great scam, since eBay can’t see the money changing hands, and won’t protect you from fraudsters. At least they seem to have abandoned chip’n’sign.

On the other hand, direct debits have been around for a long while… although that became an issue when SEPA Direct Debit arrived. Indeed, the old Italian direct debit system was advanced enough (and similar enough to the SEPA DD Core specifications) that it appears most banks and operators just re-used the same infrastructure, merely changing the size of the fields used as parameters. This worked well as long as you were not trying something as silly as direct-debiting an Italian utility to an Irish bank account or vice-versa. And I know that because I tried.

To be honest, though, not everyone uses direct debits even now. Before my parents split up and I started being the one paying the bills, my father insisted on not using direct debits at all, and instead paid the bills at the post office — and since a number of times the bills arrived past due, we ended up paying quite a lot of money in interest. The pay-at-the-post-office thing is also a fairly standard part of Italian culture, at least up to the ’90s. I don’t think any of my age-peers do that anymore, particularly because post offices are a pain to get to: in many towns you can only get to them in the morning, and not over the weekend.

Indeed, we got to the point that a lot of banks offer (or at least used to offer) a postal payment service, so you could put in your request for a postal payment online… and then receive the paper receipt by snail mail, because what they pretty much did was forward the request to a local post office, with batch-printed money order forms!

Ireland

Given the fact that Ireland is part of the EU, and thus SEPA, you may think that things are mostly the same between Ireland and Italy. You would be wrong.

First of all, bank accounts in Ireland are a mess to choose from, because none provides any decent service. The account I was given when I landed, through Google, was with AIB and one of the most commonly used ones; it required €2500 in the account on every single day of the quarter, or otherwise it would charge you for each operation you made… including received wires. Note that this is a daily minimum, not an average-over-the-quarter like its equivalent would be in Italy. And it goes without saying that it’s a no-interest account, so you need to put some money “to sleep” to avoid being charged through the nose.

The last account I settled on was an Ulster Bank premium service account at €36/month. It actually paid for itself in the much reduced time I spent straightening things out (after a horrible situation with KBC, I really wanted someone who could do that stuff for me). And it came with a secondary Sterling account (via Ulster Bank Northern Ireland — I still have that account!), as well as a savings account and credit card.

Thankfully, Ireland abandoned its country-limited bank card system, called Laser, before I immigrated, in favour of more standard Visa Debit cards — with the exception of KBC which, as far as I know, was the only bank issuing MasterCard Debit cards. A number of non-bank services issued MasterCard Debit cards too: Revolut, Curve, and An Post’s money exchange services — the latter being ironic given that An Post was the only place I knew of that did not accept MasterCard Debit cards, only Visa Debit.

As far as I know, none of the big banks issue bank cards that charge fees for taking money out of another bank’s ATM, to the point that, when I needed cash, I never bothered going anywhere other than the two Spar supermarkets around my place.

Direct debit is the norm, although some system of postal payment is still present, and a number of bills would have the details printed at the bottom. Ironically, it’s because of that that I once leaked my own credit card number. Overall, the Irish banking system appears to me fairly straightforward, with most payments being executed electronically, and widespread card acceptance.

An interesting note about direct debits is that, despite nominally being part of the SEPA Direct Debit Core system, they appear to be vastly region-locked. I had to threaten Tesco Bank (before they sold their business to AvantCard) with bringing it up with the regulator when they refused to let me direct debit an account with a non-Irish IBAN, and Ulster Bank (Ireland) didn’t budge even then. A few of the utilities also appear to still be unable to vary the direct debit amount dynamically: you instead choose how much you want to pay, and then they will issue you a statement to show how much in credit/debit you are.
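The frustrating part is that nothing about an IBAN is country-specific from a validation point of view: the same ISO 13616 mod-97 check applies to every SEPA country, so refusing a foreign IBAN is pure policy (so-called “IBAN discrimination”), not a technical limitation. Here is a minimal sketch in Python, using the standard’s well-known example IBAN rather than any real account:

```python
def iban_is_valid(iban: str) -> bool:
    """ISO 13616 mod-97 check, identical for every SEPA country."""
    s = iban.replace(" ", "").upper()
    rearranged = s[4:] + s[:4]  # move country code + check digits to the end
    digits = "".join(str(int(c, 36)) for c in rearranged)  # A=10 … Z=35
    return int(digits) % 97 == 1

# The standard's own example IBAN validates exactly like an Irish one would.
assert iban_is_valid("GB82 WEST 1234 5698 7654 32")
```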

What I would say is still Ireland’s biggest problem when it comes to banking is the lack of competition. It’s the reason why Revolut actually works so well there: there is no high-street competition, so those who are used to a different level of service go straight to FinTech offers. My impression is that the lack of competition relates to the way folks stick to the bank their parents used, something I have encountered a lot in Italy as well.

England

I nearly titled this section United Kingdom, but then I realized that there are a few things that don’t quite work the same way throughout the country. This became very apparent when I transferred from Dublin to London: in the Workday interface, when it asked me for a “UK bank account”, I provided an Ulster Bank (NI) account number, and that left me waiting another two weeks (with a roundtrip to payroll to change my account on file), because Northern Irish bank accounts are not compatible with most English payment systems, it appears.

On the bright side, the competition between banks is fierce, although there are a number of “branded” accounts that consumers don’t always notice are operated by the same institution. There are also free-by-mandate bank accounts made available by a number of well-known banks, although that is becoming an increasingly limited space.

In my experience, this is one of the most consumer-friendly banking systems, with payments between accounts, whether private or business, being free or nearly free, and direct debits being ubiquitous. Best of all, so-called “Faster Payments” transfers appear on the credited account nearly instantaneously, in a matter of minutes if not seconds, without any surcharge. Italy does have similar fast payments, but there they usually come at an additional cost.

Otherwise, the English system looks a lot like the Irish one, at least for now. Bank cards are usually issued on the Visa network, although I know of at least one bank issuing MasterCard debit cards. More recent ones stopped providing the embossed digits for fully-offline usage, which is fair, as the only place I saw using those in the past few years has been a hotel in Tokyo.

Credit cards are an interesting story here. They are not expensive, honestly, but they are very, shall we say, selective. If you just moved into the country, you’re not going to get a credit card for a number of months, which is again similar to Ireland, although I did manage to get one relatively fast with American Express (which, of course, is not cheap by itself). Once you’re allowed to get a card, the price of it is usually recovered by cashback. Or you may choose to get one of the cards that cost nothing but don’t earn any cashback at all. Personally, I decided to get two cards: Santander’s, with generic cashback, and Amazon’s (by NewDay), with Amazon-only, points-based cashback; the former is paid back by the regular payments we have on it, and the latter was free, and adds up to a few scores of pounds a year.

The one thing that England appears to have that Ireland missed (and, as far as I know, so does Northern Ireland) is the concept of “retailer offers”. For those unaware of these, many banks include “selected offers” with their bank accounts or credit cards, which you usually need to explicitly opt into. The way you do that is usually through their normal mobile banking app (or website), although I think a couple of banks have separate apps for those.

These offers are usually in the form of anything between 2% and 10% cashback for purchases over a certain amount, up to a maximum. Depending on the bank, these are offered either in actual cash, additional “points”, or a separate “rewards balance”. This is where things get interesting and complicated, because you end up having to keep in mind which card/account has a certain offer, and then deciding which one to use to pay at a certain place depending on that.

To make the calculation more complicated, the rules also differ between providers. My bank (Santander) has a single set of offers on the account, which applies to debit and credit cards alike, and if an offer is one-time, it is consumed as soon as either me or my wife uses the card for an eligible transaction. On the other hand, American Express applies the offers per card, so both of us need to check our app for valid offers, and one-time offers can be used once per card (very useful for offers that, instead of a percentage, give you back a fixed amount, like the yearly “Shop Small” week — fun fact: in the beforetimes, it confused a lot of waiters when we would decide to split the bill, particularly when our wedding rings are clearly visible). But at the same time, the banks’ offers can be combined with Airtime Rewards, so you win some, lose some.
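To give an idea of the kind of arithmetic involved, here is a toy sketch in Python; the rates, minimums, and caps are entirely made up, since each offer comes with its own terms:

```python
def offer_cashback(spend: float, rate: float, min_spend: float, cap: float) -> float:
    """Cashback from a single retailer offer: the rate only applies if the
    spend meets the minimum, and the payout is capped."""
    if spend < min_spend:
        return 0.0
    return min(spend * rate, cap)

# Hypothetical stacking: a 10% bank offer (min £10 spend, capped at £5)
# combined with a 2% Airtime Rewards rate at the same retailer.
spend = 60.0
total = offer_cashback(spend, 0.10, 10.0, 5.0) + spend * 0.02
print(f"£{total:.2f} back on a £{spend:.2f} purchase")  # £6.20
```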

Direct debits are another area that leaves some room for calculation, and shows some similarities with Ireland. The number of utilities using fixed direct debits is significantly lower, but it’s not entirely uncommon either. On the other hand, a few banks (including Santander) do apply cashback to (certain) direct debits, or may give you perks as long as you have direct debits set up. This makes for a gameable system: particularly when accounts give you perks as long as you have two direct debits, you can take a look at your periodic payments and, if they allow PayPal, use it to turn a periodic payment into a direct debit. For instance, you may have Spotify and Netflix as those direct debits.

My short impression of the English banking system is that it is perfect if you are a fan of strategy games or RPGs where you can spend a day just calculating the possible weapon/armor/enhancement combos. You can then decide to squeeze all possible combining offers, get the best exchange rate for each purchase, and so on. If you are not into these trickeries and calculations, well, you can still get a pretty much no-frills account and be happy knowing that they are not really mugging you out of a lot of money.

What is possibly the most annoying thing is that the security of logging into online banking is fairly horrible. England (and Ireland too) is the home of “Please give us the 3rd, 7th, and 38th characters of your password” requests, which are horrible security theatre and add nothing in terms of security, since transparent phishing proxies are not uncommon at all.

United States

For this country I only have a passing knowledge of the banking system, since I have never properly lived there, but I do have a bank account, so I have experienced some of the workflows, and I can compare notes with a few people I know who have lived there.

The first thing everybody will tell you about the US banking system is that it still relies heavily on paper cheques. Yes, the same cheques that had the main spotlight in movies such as Catch Me If You Can (which in my opinion is a great movie, by the way). What has changed since the time of that movie is that there’s now a lot of “virtual process” around cheques in the USA, something that I don’t think other countries have done, for good or bad.

So for instance, I have received a few US cheques myself (mostly as rebates for goods I bought in the USA); to cash them I didn’t have to wait for my next trip to the country: my bank (Chase) allows me to scan the cheques front and back with their mobile phone app, and they consider that good enough that I don’t have to present the original to anyone at all. I could literally take the cheques and frame them, if they had an emotional meaning for me (they don’t, and I didn’t). And I’m told that, on the other hand, you can ask your bank to print out a cheque for you via their online banking, and they will post it out for you just fine. I’m sometimes scared by this strange paradox of still using cheques, but besides being reminded of the William Gibson quote («The future is already here, it’s just not very evenly distributed»), I can only shake my head and thank my luck that “my” banking system doesn’t require me to deal with this.

Electronic payment in stores also appears to be unevenly distributed. While I had no issue with paying by card in most places I’ve been to, whether in California, Pittsburgh, or New York, there still appear to be a lot of services that are cash-only. One of them, ironically, is in Mountain View: Dana Street Coffee Roaster serves really good coffee, but I don’t usually keep cash at hand (except if I’m going to The New Mongolian Barbeque), which is why I usually just hang out at Red Rock unless I’m with a local.

There is also the elephant in the room: the United States has only recently introduced chip-and-PIN at points of sale around the country, and the last time I was there, a lot of them still didn’t support it. The reason for that appears to be related to the way point-of-sale devices are handled in the USA, where a lot of vendors actually bought the terminals, rather than renting them from a bank or payment processor. This is unlike Europe, where, as far as I know, most banks will sign you up as a customer and provide you with a leased terminal, which they take responsibility for updating and replacing as needed. That allowed the deployment of new technologies, such as chip cards (with either PIN or signature) and NFC payments, in much shorter order than across the pond.

Payments, particularly consumer to consumer, are a mess. The whole concept of direct bank transfers is not something you can count on in the USA, with wire transfers costing $25 at both ends of the transaction (depending on banks and other rules). This may give a bit of context on why Silicon Valley appears to want to reinvent payments every year, with new methods to “attach money” through all kinds of messaging platforms, and why people still attach themselves to the idea of bitcoin solving problems related to instantaneous transfers among peers.

On the other hand, online banking systems appear to be a lot more sensible when it comes to login security than European banks. Chase even supports proper password managers in their login, both on the website and on the mobile app, which is unheard of in the countries I have lived in here in Europe. And they will even use email for OTPs rather than silly SMS. It also appears that retailer offers exist in the USA as well: Chase is always trying to sell me a Dropbox subscription at 15% off (and TurboTax, although I don’t actually owe taxes in the US, so that would not be particularly useful).

Conclusions

This is by no means a complete and detailed comparison of all these banking systems. It is most definitely biased and partial, particularly given that, as I said, I have not lived in the United States, but only experienced it as a passerby.

I have not covered the different credit score systems of these countries (among other things because I do not have a credit score in the USA at all), nor have I talked about how to get loans or mortgages (the latter of which I never tried getting anyway). All of these are extremely important components of a banking system, and they, more than the services to consumers, should be considered when debating its health.

I just find this particular field interesting, and I think that reading more about other countries’ banking systems might get people to pay more attention to the horrible technological solutions they come up with. Nobody cares about cryptocurrencies for money transmission in Europe, because SEPA means you can transfer money for fees that are pretty much nothing by comparison with your average crypto transaction. No one had more than a passing interest in “attaching money to Gmail” outside of the USA.

I honestly hope that in the future I’ll get to know more banking systems. It would indeed mean moving again, but I think it would be interesting to see different parts of the world, too.

Moving Notes #4: Fixing Up A Flat You Rent

If you thought that after signing the lease for the new flat everything would be done and dusted, you’d be wrong. In what is definitely not a surprise to anyone, things didn’t go quite as planned once we started moving in, and not all of it was our landlord’s fault.

When we saw the flat, it was definitely in need of some freshening up, as the previous tenants had lived there for a number of years and had clearly run it down quite a bit. Most of it was taken care of before we moved in, but as you can imagine, with a pandemic going on and the country still in lockdown months later, not everything was fixed in time, and we have been understanding about it. Indeed, unfortunately, a few months later we’re still talking about some of the issues, among other things because we stayed in the same development, and the companies involved in maintenance stayed the same — with the related lack of responsiveness.

Now, this set of notes is going to be fairly generic and short on details — that’s because the situation with the flat was particularly… peculiar, and our landlord is an individual, not a company, and doesn’t use a management company either. Which means that most of the issues we had to resolve were matters for just the three of us. But there are a few lessons we learned that I thought I would share with others — I’m sure many people already know these, but someone won’t, and maybe they’ll get the idea.

Keep Track

When we received the keys to the flat, not all of the work was completed. As I said, the reason for that was a further lockdown happening in-between our signing and the move, which among other things reduced the pool of workers available to fix the various issues.

Because of the number of issues, some minor and some more involved, we started tracking them in a shared spreadsheet. This was a great advantage — particularly as our landlord was very appreciative of the approach, since he could tick off the addressed issues himself as we went. I’m honestly surprised that ticketing queues for property management companies are not more common — though given the way those companies work, maybe I shouldn’t be. I could really see a business opportunity in giving private landlords access to similar tools, but I also know such a system would have a lot of abuse thrown into it, so I can see why it doesn’t exist.

Take note of when you reported something, include pictures (links to Google Photos or Dropbox shared photos work great), and explicitly suggest what you want your landlord to do to address the situation. For a number of issues, we explicitly suggested “Just so you know, this was the case, but we can fix it ourselves” — yes, we could have insisted by the terms of the lease that he should address it, but we’ve been respecting both his time (and costs) and our privacy by working it this way. Note that, of course, any cost associated with extraordinary maintenance is to be borne by the landlord — and it’s a privilege that we can afford to advance the money for this.

Try Everything, Quickly

This may be obvious to a number of people, but it still caught us unprepared, and we keep adding things to check at inspection every time we move somewhere. One of the things I noted in the previous flat was that the extraction fan in the kitchen had two circular active carbon filters that needed replacing, so I did open this one to check whether its filters needed replacing when we did the inspection… and that’s when we realized that the wire mesh filter needed to be replaced, too — it was completely delaminated.

Among the things that we needed to get fixed or replaced was the hair trap in the shower, which we didn’t expect to find damaged, with all the “teeth” that are meant to trap hairs simply gone. This was notable because finding the right replacement piece turned out to be quite involved: the manufacturer of the showers in this building (as well as the whole quarter we live in, for all we know) went bankrupt pretty much right after the flats were built — although I don’t think there’s any connection between the two facts.

Where we nearly got burnt (pun intended) was the oven: when we moved in, we noted that the light in the oven was not working. It took some time to figure out what the bulb was at all — it turned out to be a G9 socket bulb. And for some reason that is definitely not obvious to me, it won’t turn on if the time on the oven is not set. Don’t ask me why that is the case. What we didn’t question was how the light went out in the first place. Well, it turns out that this can happen if the temperature of the oven exceeds the expected 350°C.

So here’s a piece of advice: grab an oven thermometer, set the oven to a reasonable temperature (e.g. 220°C, which is common for baking), and check what the thermometer is reading. We did that because, when my wife was preparing bread, she was scared by how fast it started browning (and eventually burning), and that didn’t feel right. Indeed, once we received the thermometer, we set the oven to 100°C, and after not even five minutes it read 270°C. Yeah, that was not good, and the oven had to be replaced.

Now there’s an interesting point to make about replacing white goods in rented flats: as Alec Watson points out, there’s an incentive problem at hand: landlords pay for the equipment, but not for the bills, so they are incentivised to buy cheap replacements in the short term even when they are not the most efficient. In this case our landlord did right by us with the replacement oven, especially given that it was replaced just before lockdown came into effect this past November — but the oven was definitely “less fancy” than the one that originally came with the apartment. Of course, none of the lost features were really a loss for us (somehow these apartments came with very fancy ovens), but it’s a point to note.

Check all the lights, check all the taps, check all the drains. Do that as soon as you can. Make sure you report any drain that even appears to be slow, because it’s likely going to get worse rather than better. Check anything that is fused to make sure it turns on.

Uncommon Cleaning Products

While most flats are meant to be cleaned professionally before you move in, what counts as “clean” is pretty much in the eyes of the beholder. So you probably want to have a good supply of cleaning products when moving into an apartment, no matter what.

There are a few more items that you may want to have, though. Dishwasher cleaners are widely available nowadays, and in my experience they do improve things a great deal, but washing machine cleaners are not often thought about. We used these in the previous flat, and they were instrumental here in getting the washing machine ready to be used, because it was in a horrible state when we came in, partly because it was never cleaned properly, and partly because it sat unused for many years.

One of the things that turned out to be very useful to have was good old citric acid. This was something we didn’t really keep at hand at all until last year, when Dyson recommended it to clean the tank of their Pure Cool+Humidify device, which we bought due to the ventilation issues in the flat we were leaving.

Aside: the Dyson was actually a very good purchase, not because of the brand, but because we just had no idea how dry this flat was going to be — without a humidifier, we approach 30% humidity overnight, and that is not good. And also, we still have ventilation issues — although not as bad as in the previous flat.

Citric acid is used in the Dyson cleaning cycle to descale the pump, but it can also be used to descale pretty much anything that needs descaling. You can buy optimized descaling products, for instance for kettles (and you should, if you have a kettle and water as hard as we have here), but for descaling things like taps and showerheads, citric acid works like a charm, and is overall much cheaper.

You’re likely going to need stronger acids to unclog drains — or to remove smells from them. These are available in most grocery stores under different brands, so feel free to browse and find them. Keep track of how many you need. This was not a problem here, but when I moved to Dublin I ended up needing the extra-strong ones in every single drain. It added up to over €60, and I got a rent discount from my landlord for that.

Also, listen to the advice that the locksmith who had to come in the other week had for us, and WD-40 your locks twice a year. As he put it: «Change your clock, lubricate your locks.» He was not here for the flat’s door (thankfully! — but also, we have barely ever both left at the same time since we moved in), but he did notice that the door’s lock was starting to get loose.

Storage Spaces And Cable Management

Managing storage in London tends to be the subject of a lot of jokes. Utility closets are virtually clogged by boilers and other equipment. What we learnt in the past is that it’s much easier to make space in a closet when most things are in square-ish boxes. So we ended up buying a bunch of boxes from Really Useful Products (we went straight to the source, after disappointing deliveries by both Ryman and Argos), and we use them to store things such as cleaning supplies, rarely used tools, and replacement lightbulbs.

The end result of this type of organization is that the volume occupied in the closet has definitely increased, but it’s also easier to move stuff around, and we no longer fear smashing lightbulbs when retrieving the fabric softener. Why do we have so many lightbulbs? Because we ended up replacing the spotlights in my office and in the ensuite with Philips Hue lights. I’ll talk about the ensuite setup another day.

Boxes go well in cabinets, too — particularly if the cabinets are too deep to otherwise get stuff out of. We’re not to the point of ordering custom-made acrylic boxes like Adam Savage – among other things, while work is good, we don’t make TV money – but we might have perused online stores’ specs to find boxes that fit the nooks and crannies of our kitchen cabinets.

Similarly, vertical space is always nice to recover where you can. Take for example the air fresheners from AirWick or Glade: most of them can be hung on the wall. And while it’s usually not a good idea to screw stuff into the wall, 3M Command makes it possible to put them on the wall without risking your deposit money. They actually tend to work better once you put them at “nose height”. We put one in the bathroom by the door, so when you walk in, it sprays and you smell it just right then and there. I have to say I feel I’m getting more bang for the buck than having it on top of the cabinet or to the side of the sink.

Another place where it’s not obvious that vertical space is cheaper than horizontal space is the shower cubicle. In both flats I have lived in in London, despite having a good walk-in shower, there’s been no shelf for holding gels and shampoo, and I usually ended up just balancing them on the taps – since installing a shelf is not possible while renting, and the suction-cup shelves tend to be more annoying than not. A hanging shower basket, instead, holds the bottles just fine, and some Command strips add the hooks needed to hold loofahs and the like.

The next problem is that you never have enough plugs for stuff, and even when you do, they’re likely in the wrong place. We tried a number of solutions, and for this one I have two pieces of advice: the first is that, if you own your furniture, you can easily screw a multi-plug extension cord into it, which makes it so much easier to manage cables.

The other thing that we found helps quite a bit is cable-less three-way extensions. They seem safe enough, and give you more plugs in spaces where two-gang sockets are too few, without adding to the cable management. We put a few of these around where we have lamps, and where we would like to plug in our vacuum cleaner.

The final cable that needed some sorting out was the network cable. Like most (but not all) of the flats we saw here in London, the network plug used by our ISP (the ever awesome Hyperoptic) is in the living room, by the expected “TV area”, but I would like to have wired network all the way to my office, since I’m on videocalls most of the day anyway. This is not an uncommon problem; in Dublin the plug was physically in the same room, but the cable still had to cross most of the room to get to my desk, while in the previous flat I paid someone to install trunking along the walls and under the doors (before my wife even moved in).

Our landlord suggested just drilling through behind the sofa to the office, but I didn’t feel comfortable doing that with the number of unknown issues with the cabling going on. So instead I ended up looking at alternative options. And here’s where I need to thank Hector once again: when I first visited him in Japan, he gave me a tour of Akihabara from a technician’s point of view and showed me the extra-thin ELECOM network cables. That’s what I used in Dublin too, and that’s what I used to go under the two doors in this flat — except I needed something longer if I didn’t want to just bridge it inside the living room. So I ended up ordering a 20-metre flat CAT6 cable from Amazon Japan, and it works like a charm: it fits neatly under the doors and doesn’t get snagged at all.

Conclusions

We’ve been living here six months, and we’re still sorting things out. Some of it is the fault of the lockdown making it hard to get people in to fix things, and some of it is because of the building management company failing to respond altogether.

Hopefully, we’ll soon be done and we can consider it settled… although I guess the change in routine once the lockdown finally eases will mean we’ll find more stuff to sort out. So maybe you’ll get more parts to this series.

After The Streams — Conclusion From My Pawsome Players Experiment

A few weeks ago I announced my intention to take part in the Cats Protection fundraiser Pawsome Players. I followed through with seven daily streams on Twitch (which you can find archived on YouTube). I thought I would at least write some words about the experience, to draw some conclusions about what worked and what didn’t, and what to expect in the future.

But before I dive into dissecting the streams, I wanted to thank those who joined me and donated. We reached £290 worth of donations for Cats Protection, which is no small feat. Thank you, all!

Motivations

There are two separate motivations to look at when talking about this: my motivation for having a fundraiser for Cats Protection, and the motivation for me doing streams at all. Those need to be separated right away.

As for the choice of charity – both my wife and I love cats and kittens; we’re childfree cat people. The week happened to culminate in my wife’s birthday, so in a way it was part of my present for her. In addition to that, I’m honestly scared for the kittens that were adopted at the beginning of the lockdown and might now be left abandoned as the lockdown eases.

While adopting a kitten is an awesome thing for humans to do, it is also a commitment. I am afraid for those who might not be able to take this commitment to full heart, and might find themselves abandoning their furry family member once travel resumes and they are no longer stuck at home for months on end.

I also think that Cats Protection, like most (though not all) software non-profit organizations, is a perfectly reasonable charity to receive disposable funds. Not to diminish the importance and effort of fundraisers and donations to bigger, more important causes, but it does raise my eyebrow when I see that the NHS needs charitable contributions to be funded — that’s a task I expect the government, which takes my tax money, to be looking after!

Then there’s the motivation for me doing livestreams at all — it’s not like I’m a particularly entertaining host, or that I have ever considered a career in entertainment. But 2020 was weird, particularly when changing employer, and it became significantly more important to be able to communicate, across a microphone, a camera, and a screen, the type of information I would usually have communicated in a meeting room with a large whiteboard and a few colour markers. So I have started looking at ways to convey information that doesn’t otherwise fit written form, because it’s either extemporaneous or requires visual feedback.

When I decided to try the first livestream I actually used a real whiteboard, and then I tried Microsoft’s Whiteboard. I have also considered the idea of going for a more complex video production by recording a presentation, but I was actually hoping for a more interactive session with Q&A and comments. Unfortunately, it looks like only a few people ever appeared in the chatrooms, and most of the time they were people I am already in contact with outside of the streams.

What I explicitly don’t care for, in these streams, is becoming a “professional” streamer. This might have been different many years ago — after all, this very blog was for a long time my main claim to fame, and I did a lot of work behind the scenes to make sure that it would give a positive impression to people, which also involved quite a bit of investment, not just in time but in money, too.

There are a number of things I already know I would be doing differently if I was trying to make FLOSS development streaming a bigger part of my image — starting with either setting up or hiring a restreaming service that would send the same content to more platforms than just Twitch. Some of those would definitely be easier to pull off nowadays with a full-time job (cash in hand helps), but they would be eating into my family life to a degree I no longer find acceptable.

I will probably do more livestreams in the upcoming months. I think there’s a lot of space for me to grow when it comes to providing information in a live stream. But why would I want to? Well, the reason is similar to the reason why this blog still exists: I have a lot of things to say — not just in the matter of reminding myself how to do things I want to do, but also a trove of experience collected largely by making mistakes and slamming head-first into walls repeatedly – and into rakes, many many rakes – which I enjoy sharing with the wider world.

Finally (and I know I said there were two motivations), there’s a subtlety: when working on something while streaming, I’m focusing on the task at hand. Since people are figuratively looking over my shoulder, I don’t get to check on chats (and Twitter, Facebook, NewsBlur), I don’t get to watch a YouTube video in the background and get distracted by something, and I don’t get to just look at shopping websites. Which means that I can get some open source hacking completed, at least timeboxed to the stream.

Tangible Results

Looking back at what I proposed I’d be doing, and what I really ended up doing, I can’t say I’m particularly happy about the results. It took me significantly longer to do some things that I expected would take me no time whatsoever, and I didn’t end up doing any of the things I meant to be doing with my electronics project. But on the other hand, I did manage some results.

Besides the already noted £290 collected for Cats Protection (again, thank you all, and in particular Luke!), I fully completed the reverse engineering of the GlucoMen areo glucometer that I reviewed last week. I think about an hour of the stream was dedicated to me just poking around trying to figure out what checksum algorithm it used (answer: CRC-8-Maxim, as used in 1-Wire) — together with the other streams and some offline work, I would say that it took about six hours to completely reverse engineer that meter into a usable glucometerutils driver, which is not a terrible result after all.
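For the curious, CRC-8/MAXIM is simple enough to express in a few lines of Python. This is a minimal sketch of the generic algorithm (reflected polynomial 0x31, i.e. 0x8C, zero init), not the actual glucometerutils driver code:

```python
def crc8_maxim(data: bytes) -> int:
    """CRC-8/MAXIM (Dallas/Maxim 1-Wire): reflected poly 0x31 (0x8C), init 0x00."""
    crc = 0x00
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift right, XOR-ing in the reflected polynomial on carry-out.
            crc = (crc >> 1) ^ 0x8C if crc & 1 else crc >> 1
    return crc

# Standard check value: CRC-8/MAXIM of b"123456789" is 0xA1.
assert crc8_maxim(b"123456789") == 0xA1
```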

What about unpaper? I faffed around a bit to get the last few bits of Meson working — and then I took on a fight with Travis CI, which resulted in me just replacing the whole thing with GitHub Actions (and incidentally correcting the Meson docs). I think this is also a good result, to a point, but I need to spend more time before I can make a new release that uses non-deprecated ffmpeg APIs — or hope that one of my former project-mates feels for me and helps.

Tests are there, but they are less than optimal, and I only scratched the surface of what could be integrated into Meson. I think that if I sat down in a chat with the folks who know the internals, I might be able to draw out some ideas that could help not just me but others… but that involves me spending time in chat rooms, and it’s not something that can be focused into a specific time slot a week. I guess that is one use where mailing lists are still a good approach, although they are no longer that common after all. GitHub issues, pull requests, and projects might be a better approach, but the signal-to-noise ratio is too low in many cases, particularly when half the comments are either pile-ons or “Hey, can you get to work on this?”. I don’t have a good answer for this.

The Home Assistant stream ended up being a total mess. Okay, in one half of it I managed to sync (and subsequently get merged) the pull requests to support bound CGG1 sensors in ESPHome. But when I tried to set up the custom component to be released, I realized two things: first, I have no idea how to make a Home Assistant custom component repository – there are a few guidelines if you plan to get your component into HACS (but I wasn’t planning to), and the rest of the docs suggest you may want to submit it for inclusion (which I cannot do, because it’s a web scraper!) – and second, the REUSE tool is broken on Windows, despite my best efforts last year to spread its usage.

The funny thing is that it appears to be broken because it started depending on python-debian, which, quite reasonably, didn’t expect to have to support non-Unix systems, and thus imported the pwd module unconditionally. The problem is already fixed in their upstream repository, but there hasn’t been a release of the package in four months, so the problem is still there.
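The usual shape of this kind of fix (and I’m sketching the general pattern here, not quoting python-debian’s actual patch) is to guard the import and degrade gracefully; owner_name below is a hypothetical helper for illustration:

```python
try:
    import pwd  # POSIX-only module: the Unix password database
except ImportError:
    pwd = None  # e.g. on Windows, where the module simply doesn't exist


def owner_name(uid: int) -> str:
    """Resolve a numeric UID to a login name, falling back to the raw
    number on platforms without the pwd module."""
    if pwd is None:
        return str(uid)
    return pwd.getpwuid(uid).pw_name
```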

So I guess the only thing that worked well enough throughout this is that I can reverse engineer devices in public. And I’m not particularly good at explaining that, but I guess it’s something I can keep doing. Unfortunately it’s getting harder to find devices that are not either already well covered, or otherwise resistant to the type of passive reverse engineering I’m an expert in. If you happen to have some that you think might be a worthy puzzle, I’m all ears.

Production and Support

I have not paid too much attention to production, except for one thing: I got myself a decent microphone, because I heard my voice in one of the previous streams and I cringed. Having worked too many years in real-time audio and video streaming, I’m particular about things like that.

Prices of decent microphones, often referred to as “podcasting” microphones when you look around, skyrocketed during the first lockdown and don’t appear to have come down much yet. You can find what I call “AliExpress special” USB microphones that look like fancy studio mics on Amazon at affordable prices, but they pretty much only look the part, not being comparable in terms of specs — they might be just as tinny as your average webcam mic.

If you look at “good” known brands, you usually find them in two configurations: “ready to use” USB microphones, and XLR microphones — the latter being the choice of more “professional” environments, and not (usually) directly connectable to a computer… but there’s definitely a wide market of USB capture cards, and they are not that much more expensive when you add it all together. The best thing about the “discrete” setup (with an XLR microphone and a USB capture card/soundcard) is that you can replace the parts separately, or even combine more of them at a lower cost.

In my case, I already owned a Subzero SZ-MIX06USB mixer with a USB connection. I bought it last year to be able to bring the sound from the ~two~ three computers in my office (Gamestation, dayjob workstation, and NUC) into the same set of speakers, and it comes with two XLR inputs. So, yes, it turned out that XLR was the better choice for me. The other nice thing about using a mixer here is that I can control some of the levels on the analogue side — I have a personal dislike of too-low frequencies, so I have done a bit of tweaking of the capture to suit my own taste. I told you I’m weird when it comes to streaming.

Also, let me be clear: unless you’re doing it (semi-)professionally, I would say that investing more than £60 would be a terrible idea. I got the microphone not only to use for the livestreams, but also to take a few of my meetings (those that don’t go through the Portal), and I already had the mixer/capture card. And even then, I was a bit annoyed by the general price situation.

It also would have helped immensely if I didn’t have an extremely squeaky chair. To be honest, now that I know it’s there, I find it unnerving. Unfortunately, just adding WD-40 from below didn’t help — most of the videos and suggestions I found on how to handle the squeaks of this model (it’s an Ikea Markus chair — it’s very common) require unscrewing most of the body to get to the “gearbox” under the seat. I guess that’s going to be one of the tasks I need to handle soon — and it’s probably worth it, given that this chair has already been through two moves!

So, hardware aside, how’s the situation with the software? Unfortunately, feng is no longer useful for this. As I was going through options last year, I ended up going for Streamlabs OBS as the “it mostly works out of the box” option. Honestly, I should probably replace it with OBS Studio, since I’m not using any of Streamlabs’ features, and I might as well stick to the original source.

As I said above, I’m not planning to take on streaming as a professional image — if I did, I probably would have also invested in licensing some background music or an “opening theme”. And I probably would have set up the stream backgrounds differently — right now I’m just changing the background pictures based on what I shot myself.

Conclusions

It was a neat experiment — but I don’t think I’ll do this again, at least not in this form.

Among other things, I think that doing one hour of stream is sub-optimal — it takes so long to set up and remind people about the chat and donations that by the time I finished providing context, I was already a quarter of an hour in. I think two to three hours is a better length — I would probably go for three hours with breaks (which would have been easier during the Pawsome Players events, since I could have used the provided videos to take breaks).

Overall, I think that for this to work it needs a bigger, wider audience. If I was in the same professional space I was in ten years ago, with today’s situation, I would probably have all kinds of Patreon subscriptions, with the blog being syndicated on Planet Gentoo, and me actually involved in a project… then I think it would make perfect sense. But given it’s “2021 me” moving in a “2021 world”… I doubt there’s enough people out there who care about what goes through my mind.