More Chinese Glucometers: Sinocare Safe AQ UG

Years ago, I was visiting Shanghai for work, and picked up a Sannuo glucometer. Somehow that blog post keeps attracting people’s interest, and indeed it has better stats than some of my more recent glucometer reviews. I found it strange, until one night, while my wife was playing online, I found myself browsing AliExpress and thought “I wonder if they sell glucometers”. Of course they do.

While browsing AliExpress for glucometers it dawned on me not just why so many people kept finding my blog post, but also the answers to a number of questions I had about that meter that I don’t think I would otherwise have found. And so, I decided to throw some “craic money” at getting another Chinese glucometer to look at.

So, first of all, what’s going on with my blog post? Turns out that there are a lot of glucometers on AliExpress. Some are at least branded the same as those you would find in Europe or the USA, with what look like official OneTouch and Abbott storefronts on AliExpress — which does not surprise me too much: I already found out that Abbott has a physical store in the UAE that sells Libre sensors, which work with my Libre reader but not with the mobile phone app. But beyond those you find a lot of generic names that don’t inspire much confidence — until you start noticing two big brands sold by a lot of different sellers: Sannuo and Sinocare. And as I noted in the previous post, the meter was branded Sannuo but had Sinocare names all around — turns out the former is just a brand of the latter.

I also started to make sense of the miniUSB plug present on the Sannuo meter: most if not all of the generically branded meters had a similar plug, but labelled as a “code” port. And indeed a few of the Sannuo/Sinocare listings had enough explanation in English to show how this works.

Coding of testing strips is something that European and North American (at least) meters used to need in the past. You would get your packet of strips and there would be a number on the outside (the “code”), which you would select on the meter either before or right after fitting the strip inside. The code carried information about the reactiveness of the strip and was needed to get an accurate reading. Over time, this practice has fallen out of favour, with “code-less” strips becoming the norm. In particular in Europe it seems like the old style of coded strips is not even marketed anymore due to regulation.

The Sannuo meter I bought in Shanghai came with two bottles of “codeless” strips, but other strip bottles you can find on AliExpress appear to still be coded. Except instead of a two-digit “code”, they come with a “code chip”, which is pretty much a miniUSB plug connected to something. Which is why the plug is electrically active, but makes no sense when it comes to the USB protocol. I have no idea how this design came about, but I suspect it has to do with miniUSB plugs being really cheap now that nobody wants to deal with them.

So back to the latest glucometer I received. I chose this particular Sinocare model because it had one feature I have never seen on any other meter: a test for uric acid. Now, I have no clue what this is meant to be used for, and I don’t even pretend I would understand its readings, but it sounded like something I could have at least some fun with. As it turns out, this also plays into my idea of figuring out how that coding system works: despite not needing codes for glucose results, you do need one for the uric acid strips!

The Safe AQ is just as “clunky” as I felt the previous Sannuo to be. And now I have a bit more of an idea why: they are actually fairly easy to assemble. Unlike the press-fit of most “western” designs (terrible name, but it conveys the effect, so please excuse me on this one), these meters are very easy to open up. They even seem to be repairable, if for instance the display were to break. I’m seriously surprised about this. The inside boards are fairly well labelled, too, and very similar between these two otherwise fairly different models. Both meters expose test points for pogo pins under the battery compartment, which are properly labelled as well.

Another similarity with the Sannuo is that this meter – like nearly every meter I could see on AliExpress! – has a spring-loaded ejector for the strips. The box says “automatic” but it just means that you don’t have to touch the strip to throw it away. My best guess is that there’s some cultural significance to this: maybe it’s more common for people to test someone else’s blood in China, and so the ejector is seen as a big improvement. Or maybe there’s an even stronger disgust with bloodied objects, and the ejector makes things cleaner. I don’t know — if anyone does, please let me know.

Now, how does this work out as a meter? My impression is fairly good overall, but the UX leaves a lot to be desired. The screen is very readable, although not backlit. The accuracy is fairly good when compared with my Libre, both against blood samples and the sensor. But figuring out how to turn it on, and how to change the date/time, took a few tries. There’s no feedback that you need to keep the power button pressed for five seconds. On the other hand, the manual has fairly intelligible English, which is probably better than some of the stuff you buy directly on Amazon UK.

There’s a kicker in this whole story of course. Who is Sinocare? You would say that, since I’m always complaining about allowing usage of devices outside their spec, it would be out of character for me to make it easy to find and buy a glucometer that most likely has not been vetted under local pharmaceutical regulations. And yet I seem to be praising a meter that I got pretty much randomly off AliExpress.

What convinced me to order and review the Sinocare is that, while researching the names I found on AliExpress, I found something very interesting. Sinocare is a Chinese diagnostics company, and probably the largest in the country. But they also own Trividia Health, a diagnostics company based in Florida with a UK subsidiary.

Trividia Health was a name I already knew — they make the True Metrix glucometers, which you can find in the USA at Walmart and CVS, and, as I found out more recently, in the UK at Boots. The True Metrix meters don’t seem to share anything, design-wise, with the Sinocare products, but you would expect that the two technology sets are not particularly different.

This also reminds me I need to see if Trividia sells the cradle for the True Metrix Air in the UK. I kept forgetting to order one in time to pick it up in the USA during a work trip, and since I don’t expect to be there any time soon, it sounds like I should try to get one here.

Fake candles, and flame algorithms

The Birch Books LEGO set that I have been modifying has an interesting “fireplace” on the first floor of the townhouse. I have been wanting to wire it up to light up for the evening scenes in my smart lighting board, but I want it to look at least a bit realistic. But how do you do that?

As I previously noted, there are flame-effect LED lamps out there, which were looked at by both bigclive and Adam Savage this very year. But those are way too big for the scale we’re talking about here. Simplifying quite a bit, you can think of those lamps as round, monochrome LED panels showing a flame animation, like an old DOS demo. Instead, what I have to work with is going to be at most two LEDs — or at least two independent channels for the LEDs.

Thankfully, I didn’t have to look far to find something to learn from. A few months ago a friend of my wife gave us a very cute candle holder as a present, but since we’re renting, burning real candles is not really a good idea. Instead I turned to Amazon (though AliExpress would have done the trick just as well) for some fake candles (LED candles) that would do the trick. These are interesting because they are not just shaped like a candle: they have a flickering light like one as well. Three of them fit fairly nicely in the holder and did the trick of giving a bit of an atmosphere to our bedroom.

I was fully ready to sacrifice one of the candles to reverse engineer it, but the base comes off non-destructively, and the board inside is very easy to follow. Indeed, you can see the schematic of the board here on the right (I omitted the on/off switch for clarity), even though the board itself has space for more components. The whole candle is controlled by a microcontroller with a PIC12F-compatible pinout (but, as Hector pointed out, much more likely to be some random Chinese microcontroller instead).

It’s interesting to note that the LED starts in “candle” mode once you turn the switch to the “on” position, without using the remote control. My guess is that if you buy one of the versions that does not come with a remote control, you can add that functionality by just soldering in a TSOP381x decoder. It also shows why the battery on these things doesn’t really last as long as you may want it to, despite using the bigger, rarer and more expensive CR2450 batteries: the microcontroller is powered up all the time, waiting to decode a signal from the remote control, even if the LED is off. I wanted to record the current flowing through in standby, but it’s fairly hard to get the battery in series with the multimeter — maybe I should invest in a bench supply for this kind of work.

So how does this all work? The LED turns out to be a perfectly normal warm white LED, with a domed form factor that fits nicely in the carved space in the fake candle itself and helps diffuse the light. To produce the flame effect, the microcontroller uses PWM (pulse-width modulation) — a pretty common way to modulate the intensity of LEDs, and the way most RGB LEDs produce combined colours, just like on my insulin reminder. Varying the duty cycle (the ratio between “high” and “low” of the digital line) changes the intensity of the light (or of the specific colour for RGB ones). If you keep varying the duty cycle, you get a varying intensity that simulates a flame.
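
To make the idea concrete, here is a minimal sketch of duty-cycle fading, assuming CircuitPython and an LED on a made-up pin (on older CircuitPython builds PWMOut lives in pulseio rather than pwmio):

```python
# Minimal sketch: varying the PWM duty cycle to change LED intensity.
# Assumptions: CircuitPython, an LED wired to pin D5; on older
# CircuitPython releases, import PWMOut from pulseio instead of pwmio.
import time

import board
import pwmio

led = pwmio.PWMOut(board.D5, frequency=5000, duty_cycle=0)

while True:
    # Ramp brightness up and back down: duty_cycle is 16-bit,
    # 0 is fully off and 65535 is fully on.
    for duty in range(0, 65536, 1024):
        led.duty_cycle = duty
        time.sleep(0.01)
    for duty in range(65535, -1, -1024):
        led.duty_cycle = duty
        time.sleep(0.01)
```

Replace the smooth ramp with a noisy waveform, and the steady fade becomes a flicker.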

The screenshot you can see is from the Saleae Logic software. It shows the duty cycle varying over the span of a few seconds, and it’s pretty much unreadable. It’s possible that I could write a decoder for the Saleae software, and export the actual waveform it uses to simulate the flickering of a flame — but honestly that sounds like a lot of unjustified work: there isn’t really “one” true flame algorithm, and as long as the flickering looks the part, it’s going to be fine.

Example of a generated Perlin noise waveform for the LED flickering

So, how do you generate the right waveform? Well, I had only a very vague idea when I started, but thanks to the awesome people in the Adafruit Discord (shout out to IoTPanic and OrangeworksDesign!) I found quite a bit of information to go by — while there are more “proper” ways to simulate a fire, Perlin noise is a very good starting point. And what do you know? There’s a Python package for it, which happens to be maintained by a friend of mine!
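
As an illustration, here’s roughly what generating such a waveform looks like, assuming the noise package from PyPI; the step size and octave count are arbitrary values I picked for the sketch:

```python
# Sketch: a flame-like brightness waveform from 1D Perlin noise.
# Assumes the "noise" package from PyPI (pip install noise).
from noise import pnoise1

def flame_samples(count, step=0.05, octaves=3):
    """Yield brightness values clamped to the 0.0-1.0 range."""
    for i in range(count):
        # pnoise1 returns small values centred on zero; offset them
        # around a "base" brightness so the flame never goes fully dark.
        n = pnoise1(i * step, octaves=octaves)
        yield max(0.0, min(1.0, 0.6 + n))

samples = list(flame_samples(256))
```

Feeding those samples to the LED’s duty cycle, at a few tens of updates per second, is pretty much all the “flame algorithm” there needs to be.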

Now there’s a bit of an issue in how to properly output the waveform as PWM — in particular its frequency and resolution. I pretty much just threw something at the wall; it works, and I’ll refine it later if needed, but the result is acceptable enough for what I have in mind, at least when it comes to just the waveform simulation.

The code I threw at the wall for this is only able to do the flickering. It doesn’t allow for much control, and pretty much expects full control of the execution — almost the same as the microcontroller on the original board, which literally turns off the moment the IR decoder receives (or thinks it’s receiving) a signal.

I was originally planning to implement this on the Feather M4 Express with PWMAudioOut — it’s not audio per se, but it’s pretty much the same thing. Unfortunately, it looks like the module needed for this is not actually built into the specific version of CircuitPython for that Feather. But now we’re in the business of productionizing code, rather than figuring out how to implement it.

Don’t Ignore Windows 10 as a Development Platform for FLOSS

Important Preface: This blog post was originally written on 2020-05-12, and scheduled for later publication, inspired by this short Twitter thread. As such it well predates Microsoft’s announcement of expanding WSL2 support to graphical apps. I considered trashing, or seriously re-editing, the blog post in light of the announcement, but I honestly lack the energy to do that now. It does leave a bad taste in my mouth to know that it will likely get drowned out in the noise of the new WSL2 features announcement.

Given the topic of this post I guess I need to add a preface to point out my “FLOSS creds” — because I have already seen too many attacks on people who use Windows at all. I have been an opensource developer for over fifteen years now, and part of the reason why I left my last bubble was that it made it difficult for me to contribute to various opensource projects. I say this because I’m clearly a supporter of Free Software and Open Source, wherever possible. I also think that different people have different needs, and that ignoring that is a failure of the FLOSS movement as a whole.

The “Year of Linux on the Desktop” is now a meme that has been running its course to the point of being annoying. Despite what FLOSS advocates keep saying, “Linux on the Desktop” is not really moving, and while I do have some strong opinions on this, that’s for another day. Most users, and in particular newcomers to FLOSS (both as users and developers), are probably using a more “user friendly” platform — and if you leave a comment with the joke about UNIX being selective with its friends, you’ll end up in a plonkfile, be warned.

About ten years ago, it seemed like the trend was for FLOSS developers to use MacBooks as their daily laptops. I did that for a while myself — a UNIX-based platform with all the tools of the trade, which allowed quite a bit of work to be done without access to a Linux platform. SSH, Emacs, GCC, Ruby, and so on. And at the same time, you had the stability of Mac OS X, with great battery life and hardware that worked out of the box. But more recently, Apple’s move towards “walled gardens” seems to be taking away from this feasibility.

But back to the main topic. Over the past many years, I’ve been using a “mixed setup” — using a Linux laptop (or more recently desktop) for development, and a Windows (7, then 10) desktop for playing games, editing photos, designing PCBs, and for logic analysis. The latter is because Saleae Logic takes a significant amount of RAM when analysing high-frequency signals, and I have been giving my gamestations as much RAM as I can just for Lightroom, so it makes sense to run it on the machine with 128GB of RAM.

But more recently I have been exploring the ability of using Windows 10 as a development platform. In part because my wife has been learning Python, and since also learning a new operating system and paradigm at the same time would have been a bloody mess, she’s doing so on Windows 10 using Visual Studio Code and Python 3 as distributed through the Microsoft Store. While helping her, I had exposure to Windows as a Python development platform, so I gave it a try when working on my hack to rename PDF files, which turned out to be quite okay for a relatively simple workflow. And the work on the Python extension keeps making it more and more interesting — I’m not afraid to say that Visual Studio Code is better integrated with Python than Emacs, and I’m a long-time user of Emacs!

In the last week I have stepped up further how much development I’m doing on Windows 10 itself. I have been using Hyper-V virtual machines for Ghidra, to make use of the bigger screen (although admittedly I’m just using RDP to connect to the VM, so it doesn’t really matter much where it’s running), and in my last dive into the Libre 2 code I felt the need for a fast and responsive editor to go through and execute parts of the disassembled code to figure out what it’s trying to do — so once again, Visual Studio Code to the rescue.

Indeed, Windows 10 now comes with an SSH client, and Visual Studio Code integrates very well with it, which meant I could just edit the files saved in the virtual machine and have the IDE also build them with GCC and execute them to get myself an answer.

Then while I was trying to use packetdiag to prepare some diagrams (for a future post on the Libre 2 again), I found myself wondering how to share files between computers (to use the bigger screen for drawing)… until I realised I could just install the Python module on Windows, and do all the work there. Except for needing sed to remove an incorrect field generated in the SVG. At which point I just opened my Debian shell running in WSL, and edited the files without having to share them with anything. Uh, score?

So I have been wondering: what’s really stopping me from giving up my Linux workstation most of the time? Well, there’s hardware access — glucometerutils wouldn’t really work on WSL unless Microsoft is planning a significant number of compatibility interfaces. Similarly for hardware SSH tokens — despite PC/SC being a Windows technology to begin with. Screen and tabbed shells are definitely easier to run on Linux right now, but I’ve seen tweets about modern terminals being developed by Microsoft and even released as FLOSS!

Ironically, I think it’s editing this blog that is the most miserable experience for me on Windows. And not just because of the different keyboard (as I share the gamestation with my wife, the keyboard is physically a UK keyboard — even though I type US International), but also because I miss my compose key. You may have noticed already that this post is full of em-dashes and en-dashes. Yes, I have been told about WinCompose, but last time I tried using it, it didn’t work and even screwed up my keyboard altogether. I’m now trying it again, at least on one of my computers, and if it doesn’t explode in my face this time, I may roll it out to the rest later.

And of course it’s probably still not as easy to set up a build environment for things like unpaper (although at that point, you can definitely run it in WSL!), or to have a development environment for actual Windows applications. But this is all a matter of a different set of compromises.

Honestly speaking, it’s very possible that I could survive with a Windows 10 laptop for my on-the-go opensource work, rather than the Linux one I’ve been using. With the added benefit of being able to play Settlers 3 without having to jump through all the hoops from the last time I tried. Which is why I decided that the pandemic lockdown is the perfect time to try this out, as I barely use my Linux laptop anyway, since I have a working Linux workstation all the time. I have indeed reinstalled my Dell XPS 9360 with Windows 10 Pro, and installed both a whole set of development tools (Visual Studio Code, Mu Editor, Git, …) and a bunch of “simple” games (Settlers, Caesar 3, Pharaoh, Age of Empires II HD); Discord ended up in the middle of both, since it’s actually what I use to interact with the Adafruit folks.

This doesn’t mean I’ll give up on Linux as an operating system — but I’m a strong supporter of “software biodiversity”, so the same way I try to keep my software working on FreeBSD, I don’t see why it shouldn’t work on Windows. And in particular, I have always found providing FLOSS software on Windows to be a great way to introduce new users to the concept of FLOSS — and focusing on providing FLOSS development tools means giving an even bigger chance for people to build more FLOSS tools.

So is everything ready and working fine? Far from it. There are a lot of rough edges, some of which I found myself, which is why I’m experimenting with developing more on Windows 10, to see what can be improved. For instance, I know that the reuse tool has some rough edges with the encoding of input arguments, since PowerShell appears to still not default to UTF-8. And I failed to use pre-commit for one of my projects — although I have not yet taken much note of what failed, to start fixing it.

Another rough edge is in documentation. Too much of it assumes a UNIX environment, and a lot of it, if it covers Windows at all, assumes “old school” batch files are in use (for instance for Python virtualenv support), rather than the more modern PowerShell. This is not new — a lot of modern documentation is only valid for bash, and if you were to use an older operating system such as Solaris you would find yourself lost in the tcsh differences. You can probably see similar concerns from back in the days when bash was not standard, and maybe we’ll have to go back to that kind of deal. Or maybe we’ll end up with some “standardization” of documentation that can be translated between different shells. Who knows.

But to wrap this up, I want to give a heads-up to all my fellow FLOSS developers that Windows 10 shouldn’t be underestimated as a development platform. And that if they intend to be widely open to contributions, they should probably give a thought to how their code works on Windows. I know I’ll have to keep this in mind for my future.

Upcoming electronics projects (and posts)

Because of a strange alignment between my decision to leave Google to find a new challenge, and the pandemic causing a lockdown of most countries (including the UK, where I live), you might have noticed more activity on this blog. Indeed, for the past two months I maintained an almost perfect record of three posts a week, up from the occasional post I have written in the past few years. In part this was achieved by sticking to a “programme schedule” — I started posting on Mondays about my art project – which then expanded into the insulin reminder – then on Thursdays I had a rotating tech post, finishing the week up with sARTSurdays.

This week is a bit disruptive because, while I do have topics to fill the Monday schedule, they are starting to be a bit more scatterbrained, so I want to regroup a bit, and gauge what the interest around them is in the first place. As a starting point, the topic for Mondays is likely going to stay electronics — to follow up from the 8051 usage on the Birch Books, and the Feather notification light.

As I have previously suggested on Twitter, I plan on controlling my Kodi HTPC with a vintage, late-’80s Sony SVHS remote control. Just for the craic, because I picked it up out of nostalgia when I went to Weird Stuff a few years ago — I’m sad it’s closed now, but thankful to Mike for having brought me there the first time. The original intention was to figure out how the complicated VCR recording timer configuration worked — but not unexpectedly the LCD panel is not working right, and that might not be feasible. I might have to do a bit more work and open it up, and that will probably be a blog post by itself.

Speaking of Sony, remotes and electronics — I’m also trying to get something else to work. I have a Sony TV connected to an HDMI switcher, and sometimes it gets stuck with the ARC not initializing properly. Fixing it is relatively straightforward (just disable and re-enable the ARC) but it takes a few remote control button presses… so I’m actually trying to use an Adafruit Feather to transmit the right sequence of infrared commands as a macro to fix that. Which is why I started working on pysirc. There’s a bit more to it, to be quite honest, as I would like to have single-click selection of inputs with multiple switchers, but again, that’s going to be a post by itself.

Then there’s some trimming work for the Birch Books art project. The PCBs are not here yet, so I don’t know yet whether I’ll have to respin them. If so, expect a mistakes-and-lessons post about it. I will also likely spend some more time figuring out how to make the board design more “proper”, if possible. I also still want to sit down and see how I can get the same actuator board to work with the Feather M0 — because I’ll be honest and say that CircuitPython is much more enjoyable to work with than the nearly-C accepted by SDCC.

Also, while the actuator board supports it, I have currently left off turning on the fireplace lights in Birch Books. I’m of two minds about this — I know there are some flame-effect single LEDs out there, but they don’t appear to be easy to procure. Both bigclive and Adam Savage have shown flame-effect LED bulbs, but those don’t really work at this small scale.

There are cheap fake-candle LED lamps out there – I saw them the first time in Italy at the one local pub that I enjoy going to (they serve so many varieties of tea!), and I actually have a few of them at home – but how they work is by using PWM on a normal LED (usually a warm light one). So what I’m planning on doing is diving into how those candles do that, and see if I can replicate the same feat on either the 8051 or the Feather.

I don’t know when the ESP32 boards I ordered will arrive, but probably will spend some time playing with those and talking about it then. It would be nice to have an easy way to “swap out the brains” of my various projects, and compare how to do things between them.

And I’m sure that, given the direction this is going, I’ll have enough stuff to keep myself entertained outside of work for the remainder of the lockdown.

Oh, before I forget — turns out that I’m now hanging out on Discord. Adafruit has a server, which seems to be a very easygoing and welcoming way to interact with the CircuitPython development team, as well as discussing options and showing off. If you happen to know of welcoming and interesting Discord servers I might be interested in, feel free to let me know.

I have not forgotten about the various glucometers I acquired in the past few months that I still have not reversed. There will be more posts about glucometers, but for those I’m using the Thursday slot, as I have not yet gone as far as physically tapping into them. So unless my other electronics projects starve it out, that’s going to continue that way.

It’s all in the schema

When it comes to file formats, I’m old school, and I probably prefer XML over something like YAML. I have also had multiple discussions with people over the years that could be summarised as “If only we used $format, we wouldn’t have these problems” — despite the problems being fairly clearly a matter of schema rather than format.

These discussions didn’t even stop in the bubble: while protocol buffers are the de-facto file format there, for the whole time I worked there, there had been multiple options for expanding the format, with templates, additional languages built on top of them, and templates for those additional languages.

Schema is metadata: data describing data. In particular, it describes what the data looks like in general terms: which fields it has, what the format of the fields is, what their valid values are, and so on. Don’t assume that by schema I refer to XML Schemas only! There’s a significant amount of information that is not usually captured by schema description languages (and XML Schema is only one of them) — things like: does that epoch time represent a date in UTC or the local timezone?
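
A trivial Python example of the kind of ambiguity I mean: the same epoch value is a valid instant either way, and nothing in a typical schema says which interpretation the producer intended.

```python
# One epoch timestamp, two plausible readings.
from datetime import datetime, timezone

ts = 1589241600
print(datetime.fromtimestamp(ts))                   # local timezone
print(datetime.fromtimestamp(ts, tz=timezone.utc))  # UTC
```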

The reason why I don’t have any particularly strong opinion on data formats, as much as I do on data schemas, is that once you have a working abstracted interface for them, you don’t need to care what the format is. This is clearly easier said than done, of course. DOM and SAX are complicated enough, and the latter is so specific to XML that there is practically zero hope of reusing a design depending on it for anything but XML. And you may have a preference for one format over another for other reasons.

For example, if your configuration is stored in XML, the SAX API allows you to parse the configuration file and fill in a structure in a single pass, which may be more memory-efficient than parsing the file into key/value pairs and requesting them by string. I did something like that with other file types through Ragel, but let’s be honest: in most cases, configuration file parsing speed is not your bottleneck (and when it is, you probably already know how to handle that).
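
For the sake of illustration, a minimal single-pass SAX parse with Python’s built-in xml.sax module; the element names are invented:

```python
# Sketch: filling a config structure in one SAX pass, instead of
# building a whole DOM and querying keys by string afterwards.
import xml.sax

class ConfigHandler(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.config = {}
        self._current = None
        self._buffer = []

    def startElement(self, name, attrs):
        self._current = name
        self._buffer = []

    def characters(self, content):
        self._buffer.append(content)

    def endElement(self, name):
        # Only leaf elements end while still "current".
        if name == self._current:
            self.config[name] = "".join(self._buffer).strip()

handler = ConfigHandler()
xml.sax.parseString(b"<config><host>example.com</host></config>", handler)
print(handler.config)  # {'host': 'example.com'}
```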

The big problem for me with choosing a schema is that unless you have an easy way to expand it, you’ll find yourself stuck at some point. Just look at the number of specifications around for complex file formats such as pcapng. Or think of the various revisions of RFCs just for HTTP/1.1 (without even considering the whole upgrade to HTTP/2 and later). Committing to a schema is scary, because if you get it wrong, you’re likely going to be stuck for a long while, or you end up with the compatibility issues of changing the format every other release of whatever tool uses it.

This is not far from what happens with office formats as well. If you look at the various formats used by Microsoft Word, they seem to change with each of the early releases, but then kind of settled down by the time Word 97 came along, before standardizing on the OOXML format. And even in the open source world, OpenDocument took quite a while before being stable enough to be usable, but is now fairly stable.

I wish I now had an answer to give everyone about how to handle schemas and updates to them. Unfortunately, I don’t. In the bubble, the answer is not to worry too hard about the schema as long as your protocol buffer definitions are valid, because the monorepo will enforce their usage. It’s a bit of a misconception as well, since even with a single format and a nearly-globally enforced schema there can be changes that need to be propagated first, but it solves a lot of problems.

I had thought about schemas before, and I’m thinking of them again, in the context of glucometerutils because I would really like to have an easy way to export the data dumped from various meters into a format that can be processed with different software. This way, I only need to care about building the protocol tooling, and leave it to someone else who has better ideas about visualisation and analysis to build tools for that part.

My best guess right now about that is to just keep a tool that can upgrade the downloaded format from one version to the next — and make sure that there’s a library in-between the exporter and the consumer, so that as long as they both have the same library version, there’s no need to keep the two tools in sync. But I have not really written any code for that.
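
The shape I have in mind is something like the following sketch, where every field name and version number is invented: each upgrader only knows how to go from one version to the next, and the library chains them up to whatever is current.

```python
# Sketch: chained one-version-at-a-time upgrades for a dump format.
# All versions and field names here are made up for illustration.

def upgrade_1_to_2(doc):
    doc["unit"] = doc.pop("units", "mg/dL")  # hypothetical field rename
    return doc

def upgrade_2_to_3(doc):
    doc.setdefault("device", "unknown")  # hypothetical new field
    return doc

UPGRADERS = {1: upgrade_1_to_2, 2: upgrade_2_to_3}
LATEST = 3

def upgrade(doc):
    version = doc.get("version", 1)
    while version < LATEST:
        doc = UPGRADERS[version](doc)
        version += 1
        doc["version"] = version
    return doc
```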

Insulin, routine, lockdown, and electronics

As you may know if you read this blog, I have insulin-dependent diabetes. In particular, I use both fast and long acting insulin, which basically means I need to take a shot of insulin every morning (at around the same time, but there’s at least a bit of leeway around it).

This is not usually a problem: the routine of waking up, getting ready to leave, making a coffee and either drinking it or taking it with me makes it very hard to miss the “take the insulin” step. Unfortunately, like for many others, this routine has gone out of the window due to the current lockdown. Maybe a bit worse for me, since I’m currently still between jobs, which means I don’t even have the routine of logging in to work from home, and of meetings.

What this meant is that days blurred together, and I started wondering whether I had remembered to take my insulin in the morning. A few too many times the answer was “I don’t know”, and I think at least twice in the past couple of weeks I did indeed forget. I needed something to make it harder to forget.

Because insulin injections tend to be one of those things I do “on autopilot”, I needed something hard to forget to do. Theoretically, the Libre app allows annotating that you took long-acting insulin (and how much), but that requires me to remember to scan my sensor right after and write down that I did. It’s too easy to forget. I also wanted something that would be a lot more explicit about telling me (and my wife) that I forgot to take my insulin. And hopefully something where I wouldn’t risk recording that I took my insulin too early in the interaction, and then not actually doing the right thing (as sometimes I reach for my insulin pen, realise there’s not enough insulin in it, and need to pick up a new one from the fridge).

The Trigger: Yes, I Took My Insulin

The first thing I decided to do was to use the spare Flic button. I bought Flics last year at Luke’s suggestion, and they have actually helped immensely — we have one in the study and one in the bedroom, to be able to turn on the smart lights quietly (at night) and without bothering with the phone apps. We kept a third one “spare”, not quite sure what to use it for until now. The button fits on the back of the cabinet where I keep my “in use” insulin pen. And indeed, it’s extremely easy and obvious to reach while putting the pen down — as an aside, most European insulin pens fit perfectly on a Muji 3-tier slanted display, which is what I’ve been using to keep mine in.

This is not exactly the perfect trigger. The perfect trigger wouldn’t require an action outside of the measured action — so in a perfect world, I would be building something that triggers when I throw a used needle into the needle container. But since that’s a complex project with no obvious solution, I’ll ignore it. I have a trigger, and it doesn’t risk getting too much in my way. It should be fine.

But what should the trigger do? The first idea I had was to use IFTTT to create a Google Calendar event when I pressed the button. It wouldn’t be great for notifying me if I forgot the insulin, but it would at least allow me to keep track of it. But then I had another idea. I had a spare Adafruit Feather M4 Express, including an AirLift FeatherWing WiFi co-processor. I originally bought it to fix an issue with my TV (which I still have not fixed), and considered using it for my art project, but it also has a big bright RGB LED on it (an Adafruit NeoPixel), which I thought I would use for notifications.

A quick Flask app later, and I had something working — the Flic button would hit one endpoint on the web app, which would record me having taken my insulin, while the Feather would request another endpoint to know how to reconfigure the LED. The webapp would have the logic to turn the LED either red or quiescent depending on whether I got my insulin for the day. There’s a bit of logic in there to define “the day”, as I don’t need the notification at 1am if I have not gone to bed yet (I did say that my routine is messed up, didn’t I?), but that’s all minor stuff.
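
The actual code is on GitHub (see below), but the gist of the design fits in a few lines of Flask. This is a simplified sketch, with invented endpoint names and an in-memory record instead of proper storage:

```python
# Sketch of the two-endpoint webapp: the Flic button POSTs to one
# endpoint, the Feather polls the other. Endpoint names, the 4am day
# boundary, and the storage are all simplifications.
import datetime

from flask import Flask, jsonify

app = Flask(__name__)
last_taken = None  # date of the last recorded injection

def current_day():
    # Shift the day boundary to 4am, so a 1am check still counts
    # as "yesterday" when I haven't gone to bed yet.
    return (datetime.datetime.now() - datetime.timedelta(hours=4)).date()

@app.route("/taken", methods=["POST"])
def taken():
    global last_taken
    last_taken = current_day()
    return jsonify(ok=True)

@app.route("/status")
def status():
    # The Feather turns the LED red when this comes back False.
    return jsonify(insulin_taken=(last_taken == current_day()))
```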

The code for the webapp (Python, Flask) and the Feather (CircuitPython) I pushed to GitHub immediately (because I can), but there’s no documentation for it yet. It also doesn’t use the NeoPixel anymore, which I’ll explain in a moment, so you may not really be able to use it out of the box as it is.

For placement, I involved my wife — I want her to be able to tell whether I didn’t take my insulin, so that if I’m having a “spaced out day”, she can remind me. We settled on putting it in the kitchen, close to the kettle, so that it’s clearly visible when making coffee — something we do regularly early in the morning. It worked out well, since we already had a USB power supply in the kitchen, for the electric cheese grater I converted.

Limitations of the Feather Platform

The Feather by itself turned out to be a crummy notification platform. Don’t get me wrong, the ease of using it is extremely nice: I wrote the CircuitPython logic in less than an hour. But if you need a clear LED to tell you whether something was done or not, you can’t just rely on the Feather. Or at least not on the Feather M4 Express. Yes, it comes with a NeoPixel, but it also comes with two bright, surface-mount LEDs by the sides of the USB connector.

One of the two LEDs is yellow, and connected to the optional LiPo battery charging circuitry; according to the documentation it’s expected to “flicker at times” — as it turns out, it seems to flicker continuously for me, to the point that at first I thought it was actually the RX/TX notification for the serial port. There’s also a red LED which I thought was just the “power” LED — but that one is actually controlled by a GPIO on the board, except it’s pulled high (so turned on) when using the AirLift Wing.

The battery charging LED does appear to behave as documented (only at times flickering) on the M0 Express I ended up getting in addition to the M4. But since that is not compatible with the AirLift (at least using CircuitPython), it might just be that this is also a side-effect of using the AirLift.

Why am I bringing up these LEDs? Well, if you want a notification light that’s either red-or-off, having a bright always-on red LED is a bad idea. Indeed, a day after setting it up this way, my wife asked me if I had taken my insulin, because she saw the red light in the corner. Yeah, it’s too bright — and easy to confuse with the one you actually want to check for.

My first reaction was to desolder the two LEDs — but I have hardly ever desoldered SMD components, and I fell for the rookie mistake of trying to desolder them with a soldering iron rather than a hot air gun, and actually destroyed the microUSB connector. Oops. A quick order from Mouser meant I had to wait a few days to go back to playing with the Feather.

A Better, Funnier Light

This turned out to be a blessing in disguise, as it forced me to take a few steps back and figure out how to make the LEDs less likely to be confused, besides trying to cover them with some opaque glue. Instead, I figured that there are plenty of RGB LED lamps out there — so why not use one of those? I ordered a cheap Pikachu from Amazon, which arrived at about the same time as the Mouser order. I knew it was probably coming from AliExpress (and, spoilers, it pretty much did — only the cardboard looked like it was printed for the UK market by a domestic company), but ordering it at the source would have taken too long.

The board inside turned out to be fairly well organised, and it was actually much easier to tap into than expected — except for me forgetting how transistors work, and bridging the wrong side of the resistor while trying to turn on the LEDs manually, thus shorting and burning them. I ended up having to pretty much reimplement the same transistor setup outside of the board, but if you do it carefully, you only need three GPIO lines to control the lamp.

The PCB from the RGB LED light I bought. Very well organised. If you want to pair it to a MCU to control it, remove the crossed out IC, then tap into the blue-marked connections. Pay attention to the side of the resistor you connect this on!

The LED colour can be varied by PWM, which is fairly easy to do with CircuitPython. You only need to be careful with which GPIO lines you use for it. I used “random” lines when testing on the breadboard, but then wanted to tidy things up using lines 10, 11, and 12 on the finalized board — turns out that line 10 does not appear to have timer capabilities, so you can’t actually use it for PWM. And lines 11 and 12 are needed for the AirLift — which means the only lines I could use for this were 5, 6 and 9.
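
For reference, this is roughly what driving the three channels looks like from CircuitPython, using those free lines; the frequency is arbitrary, and on older CircuitPython builds PWMOut comes from pulseio instead:

```python
# Sketch: PWM colour control on the lamp's three LED channels.
import board
import pwmio

channels = [
    pwmio.PWMOut(pin, frequency=5000, duty_cycle=0)
    for pin in (board.D5, board.D6, board.D9)
]

def set_colour(red, green, blue):
    # Scale 0-255 colour components to the 16-bit duty cycle range.
    for channel, value in zip(channels, (red, green, blue)):
        channel.duty_cycle = value * 65535 // 255

set_colour(255, 180, 0)  # a happy Pikachu yellow
```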

At this point, I had to change the webapp so that instead of turning the LED off to signify I took my insulin, it would instead turn the LEDs yellow, to have a bright and happy Pikachu on the kitchen counter. And an angry red one in the morning until I take my insulin.

Of course, to be able to put the lamp in the kitchen next to the kettle, I had to make sure it wouldn’t be just a bunch of boards with cables going back and forth. So first of all, I wired together a Feather Doubler — which allows a Feather (and a Wing) to sit side-by-side. The doubler has prototyping areas in-between the connectors, which were enough to solder in the three transistors and three resistors — you won’t need those if you don’t burn out the original transistors, either!

Unfortunately, because I had stacking headers on my AirLift, even with the cover off, the lamp wouldn’t sit straight. But my wife had the idea of turning the cover inside out, using the space provided by the battery compartment, which actually worked fairly well (it requires some fiddling to make it stable, and I was out of Sugru to build a base for it, but for now it’ll work).

Following Up: Alternative Designs and Trimmings

So now I have a working notification light. It works, it turns red in the morning, and turns back yellow after I click the button to signal I took my insulin. It needs a webapp running – and I have been running this on my desktop for now – but that’s good enough for me.

Any improvement from here is pretty much trimming, and trying something new. I might end up getting myself another AirLift and soldering the simpler headers on it, to make the profile of the sandwiched boards smaller. Or, if I were to remake this from an original lamp with working transistors, I could avoid the doubler problem altogether — I would only need the GPIO wiring, and could use the prototyping space next to the M4 Express to provide the connection.

I did order a few more lamps in different styles from AliExpress — they will take their due time to show up, and that’s okay. I’ll probably play with them a bit more. I ordered one that (probably) does not have RGB LEDs — it might be interesting to design a “gut replacement” board that brings in new LEDs altogether. I also ordered one that has “two-colour” images, which likely just means it has two sets of LEDs. I’ll be curious to see what those look like.

I also ordered some ESP32-based “devkits” from AliExpress — this is the same CPU used in the AirLift Wing (there as a WiFi co-processor only), but it’s generally powerful enough to run the whole thing off a single processor. This might not be as easy to prototype as it sounds, particularly given that I’m not sure I can just feed the 5V coming from the lamp’s connector to the ESP board (the ESP32 uses 3.3V), and I burnt the 3.3V regulator on the lamp’s original board. Also, since I need the transistor assembly, I would have to at least get a prototyping board to solder everything on, and — well, it might just not work out. Still nice to have them in a drawer, though.

While I don’t have a 3D printer, I’m not personally interested in having one at home, and I’m also not going into an office which may or may not have one (the old dayjob did, the new dayjob I’m not sure), I might still give a try to software for designing a “replacement base” that can fit the Feather, and screw into the rest of the lamp that is already there. It might also be a starting point for designing a version that works with the ESP32 — for that one, you would need the microUSB port on the ESP32 module rather than the one present in the lamp, so that power goes through the on-board regulator. This one is just for the craic, as my Irish friends would say, as I don’t expect it to be needed any time soon.

All in all, I’m happy with what I ended up with. The lamp is cute, and doesn’t feel out of place. It does not need to broadcast to anyone but me and my wife what the situation is. And it turns out to be almost entirely based on Python code that I just released under MIT.

My new friend who reminds me to look after myself!

On Rake Collections and Software Engineering

autumn, earth’s back scratcher

Matthew posted on Twitter a metaphor about rakes and software engineering – well, software development, but at this point I would argue anyone arguing over these distinctions has nothing better to do, for good or bad – and I ran with it a bit by pointing out that in my previous bubble, I should have used “Rake Collector” as my job title.

Let me give a bit more context on this one. My understanding of Matthew’s metaphor is that senior developers (or senior software engineers, or senior systems engineers, and so on) complain that their coworkers are making mistakes (“stepping onto rakes”, also sometimes phrased as “stepping into traps”), while at the same time making their environment harder to navigate (“spreading more rakes”, also “setting up traps”).

This is not a new concept. Ex-colleague Tanya Reilly expressed a very similar idea with her “Traps and Cookies” talk.

I’m not going to repeat all of the examples of traps that Tanya has in her talk, which I thoroughly recommend for people working with computers to watch — not only developers, system administrators, or engineers. Anyone working with a computer.

Probably not even just people working with computers — Adam Savage expresses yet another similar concept in his Every Tool’s a Hammer under Sweep Up Every Day:

[…] we bought a real tree for Christmas every year […]. My job was always to put the lights on. […] I’d open the box of decorations labeled LIGHTS from the previous year and be met with an impossible tangle of twisted, knotted cords and bulbs and plugs. […] You don’t want to take the hour it’ll require to separate everything, but you know it has to be done. […]

Then one year, […] I happened to have an empty mailing tube nearby and it gave me an idea. I grabbed the end of the lights at the top of the tree, held them to the tube, then I walked around the tree over and over, turning the tube and wrapping the lights around it like a yuletide barber’s pole, until the entire six-string light snake was coiled perfectly and ready to be put back in its appointed decorations box. Then, I forgot all about it.

A year later, with the arrival of another Christmas, I pulled out all the decorations as usual, and when I opened the box of lights, I was met with the greatest surprise a tired working parent could ever wish for around the holidays: ORGANIZATION. There was my mailing tube light solution from the previous year, wrapped up neat and ready to unspool.

Adam Savage, Every Tool’s a Hammer, page 279, Sweep up every day

This is pretty much the definition of Tanya’s cookie for the future. And I have a feeling that if Adam were made aware of Tanya’s trap concept, he would probably point at a bunch of tools with similar concepts. Actually, I have a feeling I might have heard him say something about throwing out a tool that had some property that was the opposite of everything else in the shop, making it dangerous. I might be wrong, so don’t quote me on that; I tried looking for a quote from him and failed to find anything. But it is something I would definitely do among my tools.

So what about the rake collection? Well, one of the things that I’m most proud of in my seven years at that bubble is the work I’ve done trying to reduce complexity. This took many different forms, but the main one has been removing multiple optional arguments to interfaces of libraries used across the whole (language-filtered) codebase. Since I can’t give very close details of what that’s about, you’ll find the example a bit contrived, but please bear with me.

When you write libraries that are used by many, many users, and you decide that you need a new feature (or that an old feature needs to be removed), you’re probably going to add a parameter to toggle the feature: either you expect the “modern” users to set it, or, if you can, you do a sweep over the current users to have them explicitly request the current behaviour, and then you change the default.

The problem with all of this is that cleaning up after these parameters is often seen as not worth it. You changed the default, why would you care about the legacy users? Or you documented that all the new users should set the parameter to True; that should be enough, no?

That is a rake. And one that is left very much in the middle of the office floor by senior managers all the time. I have seen this particular pattern play out dozens, possibly hundreds of times, and not just at my previous job. The fact that the option is there to begin with already increases the complexity of the library itself – and sometimes that complexity gets to be very expensive for the already over-stretched maintainers – but it’s also going to make life hard for the maintainers of the consumers of the library.
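
In its smallest form, the rake looks something like this (the names are, of course, invented):

```python
# The rake, in miniature: a toggle that was meant to be temporary.
def _fetch_plaintext(url):
    ...  # the legacy codepath nobody wants to own

def _fetch_tls(url):
    ...  # the codepath everyone is supposed to use

def fetch(url, use_secure_transport=False):
    """Fetch a resource.

    The docs say new callers should pass use_secure_transport=True,
    but the default says otherwise, and copy-paste does the rest.
    """
    if use_secure_transport:
        return _fetch_tls(url)
    return _fetch_plaintext(url)
```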

“Why does the documentation says this needs to be True? In this code my team uses it’s set to False and it works fine.” “Oh this is an optional parameter, I guess I can ignore it, since it already has a default.” *Copy-pastes from a legacy tool that is using the old code-path and nobody wanted to fix.*

As a newcomer to an environment (not just a codebase), it’s easy to step on those rakes (sometimes uttering exactly the words above), and not know it until it’s too late. For instance, a parameter might control whether you use a more secure interface over an old one that you don’t expect new users to need. When you become more acquainted with the environment, the rakes become easier and easier to spot — and my impression is that for many newcomers, that “rake detection” is the kind of magic that puts them in awe of the senior folks.

But rake collection means going a bit further. If you can detect the rake, you can pick it up, and avoid it smashing into the face of the next person who doesn’t have that detection ability. This will likely slow you down, but an environment full of rakes slows down all the newcomers, while a mostly rake-free environment is much more pleasant to work in. Unfortunately, that’s not something that aligns with business requirements, or with the incentives provided by management.

A slight aside here. Also on Twitter, I have seen threads going by about the fact that game development tends to be a time-to-market challenge, which leaves all the hacks in place because time-to-market is all you care about. I can assure you that the same is true for some non-game development too. Which is why “technical debt” feels like it’s rarely tackled (on that note, Caskey Dickson has a good technical debt talk). This is the main reason why I’m talking about environments rather than codebases. My experience is with long-lived software, and libraries that existed for twice as long as I worked at my former employer, so my main environment was codebases, but that is far from the end of it.

So how do you balance the rake-collection with the velocity of needing to get work done? I don’t have a really good answer — my balancing results have been different team by team, and they often have been related to my personal sense of achievement outside of the balancing act itself. But I can at least give an idea of what I do about this.

I described this to my former colleagues as a “three times” rule of thumb — to keep with the rake analogy, we can call it “three notches”. The first time I found something that annoyed me (inconsistent documentation, required parameters that made no sense, legacy options that should never be used, and so on), I would try to remember it, rather than going out of my way to fix it. The second time, I might flag it somehow (e.g. by adding a more explicit deprecation notice, or logging a warning when the legacy codepath is executed). And the third time, I would add it to my TODO list and start addressing the problem at the source, whether it was within my remit or not.
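
For the second notch, the flagging can be as simple as making the legacy codepath announce itself. A sketch, continuing the invented example from above:

```python
# "Second notch": make the rake visible before someone steps on it.
import warnings

def fetch(url, use_secure_transport=False):
    if not use_secure_transport:
        warnings.warn(
            "fetch() without use_secure_transport=True is deprecated; "
            "the plaintext codepath will eventually be removed",
            DeprecationWarning,
            stacklevel=2,
        )
    ...
```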

This does not mean that it’s a universal solution. It worked for me, most of the time. Sometimes I got scolded for having spent too much time on something that had little to no bearing on my team; sometimes I got celebrated for unblocking people who had been fighting with legacy features for months, if not years. I do think it was always worth my time, though.

Unfortunately, rake collection is rarely incentivised. The time spent cleaning up after the rakes left in the middle of the floor eats into one’s own project time, if it’s not the explicit goal of one’s role. And the fact that newcomers don’t step on those rakes and hurt themselves (or slow down, afraid of bumping into yet another rake) is rarely quantifiable in a way that gets managers to agree to it.

What could he tell them? That twenty thousand people got bloody furious? That you could hear the arteries clanging shut all across the city? And that then they went back and took it out on their secretaries or traffic wardens or whatever, and they took it out on other people? In all kinds of vindictive little ways which, and here was the good bit, they thought up themselves. For the rest of the day. The pass-along effects were incalculable. Thousands and thousands of souls all got a faint patina of tarnish, and you hardly had to lift a finger.

But you couldn’t tell that to demons like Hastur and Ligur. Fourteenth-century minds, the lot of them. Spending years picking away at one soul. Admittedly it was craftsmanship, but you had to think differently these days. Not big, but wide. With five billion people in the world you couldn’t pick the buggers off one by one any more; you had to spread your effort. They’d never have thought up Welsh-language television, for example. Or value-added tax. Or Manchester.

Good Omens page 18.

Honestly, I often felt like Crowley: I hardly ever worked on huge, top-to-bottom cathedral projects. But I would be sweeping up a bunch of rakes, so that newcomers wouldn’t hit them, and all of my colleagues would be able to build stuff more quickly.

The bakery is just someone else’s oven

Most of the readers of this blog are likely aware of the phrase “The Cloud is someone else’s computer” — sometimes with a “just” added to make it even more judgemental. Well, it should surprise nobody who knows me that I’m not a particular fan of either the phrase or the sentiment within it.

I think the current Covid-19 quarantine is actually providing a good alternative take on it, which is the title of this post: “The bakery is just someone else’s oven.”

A lot of people during this pandemic-caused lockdown decided that it’s a good time to make bread, whether they had tried it before or not. And many had to deal with similar struggles: from being unable to find ingredients, to making mistakes while reading a recipe that led to terrible results, to following recipes that give outright wrong instructions because they assume something that is not spelled out explicitly (my favourite being the recipes that assume you have a static oven — turns out that the oven we have at home only has “fan assisted” and “grill” modes).

Are we all coming out of the current crisis and deciding that home baking is the only true solution, and that using a bakery is fundamentally immoral? I don’t think so. I’m sure there will be some extremists thinking that; there always are. But for the most part, reasonable people will accept that baking bread is not easy. While freshly baked bread can taste awesome, for most people making time to do it every day would cut heavily into time needed for other things that may or may not be more important (work, childcare, wellness, …). So once the option of just buying good (or at least acceptable) bread from someone else becomes practical again, lots of us will go back to buying it most of the time, and just occasionally baking.

I think this matches very well the way Cloud and self-hosted solutions relate to each other. You can set up your own infrastructure to host websites, mail servers, containers, Docker, apps and whatever else. But most of the time this detracts from the time you would spend on something else; or you may need resources that are not linear to procure; or you may make mistakes configuring them; or you may just not know what the right tools are. While some (most) of these points also apply to using Cloud solutions from various providers, they are of a different level — the same way you may have problems following a recipe that includes bread as an ingredient, rather than being a recipe for bread.

So for instance, if you’re a small business, you may be running your own mail server. It’s very possible that you even already have a system administrator, or retain an “MSP” (Managed Service Provider), which is effectively a sysadmin-for-hire. But would it make sense for you to do that? You would have to have a server (virtual or physical) to host it, you would have to manage the security, the connectivity, spam tracking, logging, … it’s a lot of work! If you’re not in the business of selling email servers, it’s not likely that you’d ever get a good return on that resource investment. Or you could just pay FastMail to run it for you (see my post on why I chose them at the end).

Of course saying something like this will get people to comment on all kinds of corner cases, or risks connected with it. So let me be clear that I'm not suggesting this is the only solution. There are cases in which, even though it is not your primary business, running your own email server has significant advantages, including security. There are threat models and threat models. I think this once again matches the comparison with bakeries — there are people who can't buy bread, because the risk that whichever bread they buy has been contaminated with something they are allergic to is not worth taking.

If you think about the resource requirements, there's another similar situation there: getting all the ingredients is very easy in normal times, so why bother buying bread? Well, it turns out that the way supply works is not obvious, and definitely not linear. At least in the UK, there was an acknowledgement that "only around 4% of UK flour is sold through shops and supermarket", and that the problem is not just the increase in demand, but also the significant change in usage patterns.

So while it would be "easy" and "cheap" to host your app on your own servers in normal times, there are situations in which this is either not possible, significantly inconvenient, or expensive. What happens when your website is flooded by requests, and your mail server, sharing the same bandwidth, becomes unreachable? What happens when a common network zero-day is being exploited, and you need to defend against it as quickly as reasonably possible?

Pricing is another aspect that maps fairly well between the two concepts: I have heard so many complaints about Cloud being so expensive that it will bankrupt you — and they are not entirely wrong. It's very easy to think of your company as so big that it needs huge, complex solutions, which will cost a lot more than a perfectly fine simpler one, whether Cloud-based or self-hosted. But the price difference shouldn't be accounted for solely by comparing the price paid for the hosting against the price paid for the Cloud products — you need to account for a lot more costs, related to management and resources.

Managing servers takes time and energy, and it increases risks, which is why a lot of business managers tend to prefer outsourcing it to a cloud provider. It's pretty much always the case, anyway, that some of the services are outsourced. For instance, even the people insisting on self-hosted solutions don't usually go as far as "go and rent your own cabinet at a co-location facility!", which would still be a step short of running your own fiber optic straight from the Internet exchange, and… well, you get my point. In the same spirit, the price of bread does not only include the price of the ingredients, but also the time spent preparing it, kneading it, baking it, and so on. And similarly, even the people who I have heard arguing that baking every day is better than buying store bread don't seem to suggest you should go and mill your own flour.

What about the recipes? Well, I'm sure you can find a lot of recipes in old cookbooks with ingredients that are nowadays understood to be unhealthy — and not only in the "healthy cooking" kind of way, either. In the same way, you can definitely find old tutorials that provide bad advice, where a professional baker would know how to do things correctly.

At the end of the day, I'm sure I'm not going to change anyone's mind, but I just wanted to have something to point people at the next time they try the "no cloud" talk on me. And a catchy phrase to answer them with.

Birch Books: 8051 and Yak Shaving

I have previously discussed my choice of splitting the actuator board, pointing out that I'll probably try designing an alternative controller board using something like the Adafruit Feather M4, and writing the firmware with CircuitPython. Part of the reason for that is that it's just easier, but part of it is that the 8051 is an annoying platform to work with.

There are a few different compilers for this platform, but as far as I know, the only open-source and maintained one is SDCC, the Small Device C Compiler. I hadn't used it in forever, but I was very happy to see a new release this year, with C2X work in progress and C11 (mostly) supported, so I was in high spirits when I started working on this.

A Worrying Demonstration

I started from a demo that was supposed to be written explicitly for the STC89. The first thing I noted was that the code does not actually match the documentation on the same page: it references a _sdcc_external_startup() function that is not actually defined. On the other hand, that function does not seem to be required. There are other issues with the code, and for something that is designed to work with the STC89, it seems overly complicated. Let me try to dissect the problems.

First of all, the source code manually declares the "Special Function Registers" (SFRs) for the device. In this case I don't really understand the point, since all of the declared registers are part of the base 8051 architecture, and would already be declared by any of the model-specific header files that SDCC provides. While the STC89 does have a number of registers that are not found elsewhere, none of those are used here. In my code I ended up importing at89x52.h, which is meant for the Atmel (now Microchip) AT89 series, and is the closest header I found for the STC89. I have since filed a patch with a header written based on the other headers and the datasheet.
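
For illustration, this is roughly what the difference looks like in SDCC syntax (the registers here are base-8051 examples I picked for the sketch, not the demo's exact list):

// What the demo does by hand. Redundant, since these are base-8051
// registers that every model-specific header already declares:
__sfr __at (0xA8) IE;   // interrupt enable register
__sbit __at (0xAF) EA;  // global interrupt enable, bit 7 of IE

// The less error-prone alternative:
#include <at89x52.h>    // declares IE, EA, and all the other SFRs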

Side note: the datasheet is impressive in its level of detail. It includes everything you may want to know, including the full ISA description and a number of example cases.

Once you have the proper header definitions, you can also avoid a lot of binary flag logic — the most important registers on the 8051 are bit-addressable, so you don't need to remember how many bits to shift around to set the correct flag to, say, enable interrupts. And if you're worried that using the bit-addressed registers would be slower: no, as long as you're changing fewer than three bits in a register at a time, setting them through the bit-addressed variants is the same speed or faster. In the case of this demo, the original code uses two orl instructions, each taking 2 cycles, to set three bits total — using the setb instruction, it only takes 3 cycles.
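
In C terms, and assuming the register names from the 8052 header, the difference looks roughly like this (a sketch, not the demo's exact code):

// Mask style: each |= compiles to an orl on the SFR, 2 cycles apiece,
// 4 cycles total for these two lines:
IE |= 0xA0;     // EA (bit 7) and ET2 (bit 5) in one orl
T2CON |= 0x04;  // TR2 (bit 2) in another orl

// Bit-addressed style: one setb per bit, 1 cycle apiece, 3 cycles total:
EA = 1;
ET2 = 1;
TR2 = 1;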

Once you use the correct header (my contributed stc89c51rc.h, the at89x52.h, or even the very generic 8052.h), you also gain access to other older-than-thirty-years features that weren't part of the original 8051, but were part of the subsequent 8052, which both the STC89 and AT89 series derive from. One of these features, as even Wikipedia knows, is a third 16-bit timer. This matters here, since the demo is effectively just a worked example of "[setting] up and using an accurate timer".

Indeed, the code is fairly complicated, as it pokes at the timer both in main() and in the interrupt handler clockinc(). The reason is that Timer 0 is configured in "Mode 0": the timer register is 13-bit (split across TH0 and TL0), its rollover causes an interrupt, but you need to reload the timer by hand afterwards. And that's needed because you need more than 8 bits to make the timer fire at 1kHz (once every millisecond; at the default 11.0592 MHz clock, with 12 clocks per machine cycle, a millisecond is about 922 timer ticks, which does not fit in 8 bits), and while Timer 0 supports "automatic reload", it only supports 8-bit reload values, since it uses TH0 to hold the reload value.

8052 derivatives support a third timer (Timer 2), which is 16-bit, rather than 8- or 13-bit. And it supports auto-reload of the full 16 bits through RCAP2H and RCAP2L. The only other complication is that, unlike with Timer 0 and Timer 1, you need to manually "disarm" the interrupt flag (TF2) in the handler, but that's still a lot less code.

I found the right way to solve this problem on Google Books, in a book that does not appear to have an ebook edition, and that does not seem to be in print at all anymore. The end result is the following modified demo.

// Source code under CC0 1.0
#include <stdbool.h>
#include <mcs51/8052.h>

volatile unsigned long int clocktime;
volatile bool clockupdate;

void clockinc(void) __interrupt(5)
{
    TF2 = 0;  // disarm the interrupt flag: Timer 2 needs this done manually.
    clocktime++;
    clockupdate = true;
}

unsigned long int clock(void)
{
    unsigned long int ctmp;

    // clocktime is wider than a register, so reading it is not atomic;
    // retry until the interrupt did not fire mid-read.
    do
    {
        clockupdate = false;
        ctmp = clocktime;
    } while (clockupdate);

    return ctmp;
}

void main(void)
{
    // Configure Timer 2 for the 11.0592 MHz default SYSCLK:
    // 12 clocks per machine cycle makes ~922 cycles per millisecond,
    // i.e. 1000 ticks per second.
    TH2 = (65536 - 922) >> 8;
    TL2 = (65536 - 922) & 0xFF;
    RCAP2H = (65536 - 922) >> 8;  // 16-bit auto-reload value
    RCAP2L = (65536 - 922) & 0xFF;

    TF2 = 0;  // clear the interrupt flag
    ET2 = 1;  // enable Timer 2 interrupt
    EA = 1;   // enable interrupts globally
    TR2 = 1;  // start the timer

    for (;;)
        P3 = ~(clock() / 1000) & 0x03;
}

I can only assume that this demo was written long enough ago that the author forgot to update it, because… the author is an SDCC developer, and refers to his own papers about working on it at the bottom of the demo.

A Very Conservative Compiler

Speaking of the compiler itself, I had no idea what a mess I would get myself into by using it. Turns out that despite being the de-facto only open-source embedded compiler people can use for the 8051, it is not a very good compiler.

I don’t say that to drag down the development team, who are probably trying to wrestle a very complex problem space (the 8051’s age make its quirk understandable, but irritating — and the fact that there’s more derivatives than there’s people working on them, is not making it any better), but rather because it is missing so much.

As Philipp describes it, SDCC "has a relative[ly] conservative architecture" — I would say that it's a very conservative architecture, given that even some optimisations that, as far as I can tell, are completely safe are being skipped. For example, var % 2 (which I was using to alternate between two test patterns on my LEDs) was generating a call into a function implementing generic integer modulo, despite being equivalent, for unsigned values, to var & 1, which maps onto the basic instruction set.

Similarly, the compiler does not optimise division by powers of two — which means that for anything that is not a build-time constant, you're better off using bitwise operations rather than divisions. It's another thing that I addressed in the demo above, even though there it doesn't matter, as the value is constant at build time. Both points are sketched below.
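
A minimal illustration of both points (the function names are made up, and exactly what SDCC emits depends on the version and the types involved):

// Neither of these gets strength-reduced: both go through SDCC's
// generic modulo/division handling.
unsigned char blink_mod(unsigned char counter)
{
    return counter % 2;
}

unsigned char scale_div(unsigned char value)
{
    return value / 8;
}

// The bitwise equivalents (valid for unsigned operands) compile down
// to plain AND/shift instructions instead.
unsigned char blink_and(unsigned char counter)
{
    return counter & 1;
}

unsigned char scale_shift(unsigned char value)
{
    return value >> 3;
}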

Speaking of build-time constants — turns out that SDCC does not do constant propagation at all. Even when you define something static const, and never take its address, it's emitted in the data section of the output program, rather than being replaced at build time where it's used (see the sketch below). Together with the lack of optimisations noted above, this meant I gave up on my idea of structuring the firmware in easily-swappable components — those would rely on the compiler doing optimisation passes such as constant propagation and inlining, while here we're talking about the lack of much lower-level optimisations.
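
A sketch of what I mean (hypothetical names):

// Emitted as an initialised object in the data section, and read back
// at runtime every time it is used:
static const unsigned char step_const = 32;

// Substituted textually by the preprocessor, so the compiler sees a
// genuine build-time constant:
#define STEP 32

unsigned char next_const(unsigned char counter)  { return counter + step_const; }
unsigned char next_define(unsigned char counter) { return counter + STEP; }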

Originally, this blog post was also going to touch on the fact that the one library of 8051 interfaces I found hasn't been touched in six years, still has a few failed merge markers, and does not even parse with modern SDCC — but then again, now that I know SDCC does not optimise even the most basic of operations, I don't think using a library like that is a good idea anyway: the I/O module there is extremely complicated, considering that most ports' I/O lines can be accessed through bit-addressed registers.

Now, as Andrea (Insomniac) pointed out, Philipp also has a document on using LLVM with SDCC — but the source code it references is more than five years old, and relies on the LLVM C backend, which means it generates C code for SDCC to continue compiling. I do wonder if it would make sense to have a proper LLVM target for the 8051 instead — it's beyond the amount of work I want to put into this project, but last year AVR support was merged into LLVM, which already allows using (or at least trying) Rust on 8-bit controllers. It would be interesting to see if 8051 cores could be used with something other than C (or manually written assembly).

You could wonder why I care this much about a side-project MCU that is quite a bit older than me. The thing is, I don't, really. I just seem to keep bumping into the 8051/8052 in various places. I nearly wrote a disassembler for it to hack at my laptop's keyboard layout a few years ago — I still feel bad I didn't complete that project. The 8051 is still an extremely common micro in low-power applications, and the STC89 in particular is possibly the cheapest micro you can prototype with at home: you can get 20 of them for less than 60p each from AliExpress, if you have the time to wait. (I know — I just ordered a lot, to have them around if I decide to do more with them now that I sort-of understand them.) The manufacturer appears to still make many variants of them, and I would be extremely surprised if you didn't have a bunch of these throughout your home: in computers, dishwashers, washing machines, mice, and other devices that just need some cheap and cheerful logic controller. Heck, I expect them to be used in glucometers, too!

With all these devices tied to closed-source, proprietary compilers, I would feel more comfortable if there was some active work on supporting a modern compiler platform in the open-source world as well. From my point of view, it sounds like the needs of industrial users and those of the hobbyist community have diverged a lot on this topic.

Sum It All Up

So for my art project I decided that even SDCC is good enough, but I wanted to make sure I would not end up with broken code (which appears to happen fairly often), so I ended up reading the generated assembly to make sure it made sense. Despite not being particularly familiar with the 8051 ISA, thanks to the Wikipedia article and the detailed datasheet from STC, it wasn't too hard to read through it.
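
For reference, getting at the generated assembly is straightforward, since SDCC keeps its intermediate files around (the file name here is made up, and the flags may vary between releases):

# Compile without linking; the generated firmware.asm is left on disk:
sdcc -c firmware.c
# Or stop before assembling entirely:
sdcc -S firmware.c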

While I was going through it, I also figured out how to rewrite parts of the C code to force SDCC to emit decent code. For instance, instead of a branch that adds either 1 or 32 to a counter, I was better off making a temporary variable hold 1, conditionally changing it to 32, and then adding that variable — as in the sketch below. The fact that SDCC couldn't optimise that itself made me sad, but again, it's understandable given the priorities.
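
Here is a sketch of that rewrite (the names are made up; the real code is in the repository):

#include <stdbool.h>

// Branchy version: two separate code paths, each with its own add:
unsigned char advance_branchy(unsigned char counter, bool fast)
{
    if (fast)
        return counter + 32;
    return counter + 1;
}

// What SDCC digests better: choose the operand first, add once:
unsigned char advance_linear(unsigned char counter, bool fast)
{
    unsigned char step = 1;
    if (fast)
        step = 32;
    return counter + step;
}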

Hopefully I have kept the source code fairly readable. You can check the history to see the various things I kept changing to make it more readable in assembly as well. Part of the changes meant changing some of my plans: in my first notes I wanted to run through 20 "hours" configurations in 60 minutes — but to optimise the code I decided that it'll run 16 "hours" in just over 68 minutes, that is, 256 seconds per "hour", 4096 seconds total. That way I could use a lot of powers of two and do away with annoying calculations.

REUSE: Simplifying Code Licensing

I have recently written about how licensing is one of the important things that make it easy to contribute code. While I was preparing that blog post, I was also asking Matija if he knew of anything that would validate the presence of license and copyright information in new files. This is because in the past I might have forgotten it myself, and I have definitely merged a pull request or two in which a new contributor forgot to add the headers to new files — I'm not blaming them, I'm not even blaming myself; I blame the fact that nothing stopped either of us!

And indeed, Matija pointed at REUSE, which is a project by the FSFE (which I still support, because they are positive!), and in particular at reuse-tool, whose linter ensures that every file in a repository is properly tagged with a license, either inline or (if that's not possible) through an explicit .license file. I love the idea.
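
In practice, the tagging amounts to a pair of SPDX comments at the top of each file (the name and year here are placeholders):

# SPDX-FileCopyrightText: 2020 Jane Doe <jane@example.com>
#
# SPDX-License-Identifier: MIT

For a file that can't carry comments, say logo.png, the same two lines go into a logo.png.license companion file, and reuse lint then checks that every file in the tree is covered one way or the other.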

The tool is still a bit rough around the edges, and for instance (because of the spec) it does not have any provision to ignore 0-sized files (or symlinks, as it turns out). Hopefully that can be fixed in the spec and the tool soon. When I started using it, it also didn’t know how to deal with any file that is there to support Autotools, which was clearly something I needed to fix, but that’s a minor issue — clearly the tool has focused on the stuff people care the most about, and Autotools projects are definitely going out of fashion, for good or bad.

I’ve now used reuse-tool to add proper licensing to all the files in most of the repositories that I’ve been actively working on. I say most — I have not touched usbmon-tools yet, because for that one I need to pay more attention, as the copyright not not fully mine. Which means that most likely even the silly configuration files that are unlikely to be copyrightable will have to be licensed under Apache 2.0.

Speaking of the configuration files — the FAQ I linked above suggests using the CC0-1.0 license for them. I originally followed that, and it took me a moment to remember that I should not: the reason, once again found in the previous post, is that CC0-1.0 is not an OSI-approved license, and that makes it impossible for some people (starting with my ex-colleagues at Google and other Alphabet companies) to contribute to the software, even if it's just fixing my Travis CI configuration. Instead I selected MIT — which is pretty much equivalent in practice, even though not in theory. Update 2020-05-14: there's some discussion of alternative recommendations going on right now. In light of that, I have changed my mind and will use the Unlicense for configuration files for the foreseeable future. As I said in the other post, Fedora prefers CC0-1.0 over it, but the Unlicense does not seem to be outright banned by any organization or project.

I did that for a number of my projects, including those under the python-scsi organization, and included it in a pending pull request I already had open for cachecontrol. Not all of them pass the linter cleanly yet, because of the 0-sized-file issue I noted above, and for the same reason I also didn't set them up to use pre-commit (despite Carmen adding support for it very quickly). But at least it's a step in the right direction, for sure.

Speaking of pre-commit — one of the reasons why I wanted this is to make sure that the .license files are not left uncommitted. With the pre-commit check in place, the lint needs to pass for the staged changes, rather than for the checked-out tree. Once again, yay for automation.
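
The setup is just a few lines in .pre-commit-config.yaml (the revision here is a placeholder; pin whatever release is current):

repos:
  - repo: https://github.com/fsfe/reuse-tool
    rev: v0.11.1
    hooks:
      - id: reuse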

I have to say I really welcome this push from FSFE — particularly because I have found myself unable to contribute to one of their projects before, due to missing licensing information in the repository. I also like the fact that they care about getting people to actually use this, rather than making a purity tool for the sake of purity, which I've seen happen in other organisations. Again, score one for the FSFE.

So if you have an open source project, and you want to make it easy for everyone, including those who may be working for big corporations, to contribute back, give setting this up with the tool a try. It should significantly reduce the cost of contribution, and even that of adoption.