Clothing & Games Spotlight: TeeTurtle & Unstable Unicorns

— What do you do at the office?
— I keep it weird.

I have already written about Genki Gear, which is probably the “uniform supplier” for geeks in the British Isles. They are clearly not alone — and I thought I would give a shout-out to another geeky supplier.

TeeTurtle makes clothes, slippers, plushies, underwear, stickers, … with some of the geekiest art ever. On their website you can find original art, Disney and Marvel designs, Star Wars or, if that’s your cup of tea (it’s not mine), Rick & Morty.

And they have probably my favourite filter selection for an art clothing store: by animal! Because whether you’re into puppies, cats, foxes, pandas, bunnies, dragons, … everybody needs their favourite t-shirt.

And, let’s not forget, Unicorns!

Indeed, these are the same people behind Unstable Unicorns and a bunch of other awesome board and card games. If you have not had a chance to take a look at those games, do so now. Unstable Unicorns is one of our favourite party games, together with Exploding Kittens.

If you’re locked in with your significant other, or with other housemates, you may want to give it a try… but just remember the advice on the box: Unicorns are your friends now!

Programming Languages are Tools

Recently, I shared a long Twitter thread as I went through a bunch of GitHub repositories that I officially archived — they are all still available, of course, but I’m no longer taking issues or pull requests; if you happen to bump into something you would care to use, feel free to fork it.

As these projects come from many different “eras” of my career, they all differ in style, license, and programming language used. I have already written before about my current take on licensing, so I don’t think there’s any need for me to go through more of it right now. But I haven’t really spent much time talking about languages on the blog, so I thought I would at least share my point of view here.

I started programming with BASIC, on the C64, GW-BASIC, QBasic, and eventually Visual Basic 5 CCE. I went on to learn C++ for high school, and tried (and failed) to learn Pascal. I learnt PHP because that was what seemed cool, and I couldn’t get into Java at first at all. I learnt to appreciate “good old C”, and shell scripting. I failed at my first two attempts to get into Python, ended up picking up Ruby, and eventually C#. For a job I ended up digging deep into ActionScript (Flash’s programming language), and some proprietary language that was attached to the RDBMS they had been using. To make my life easier, I learnt Perl (in 2012!) to extend Munin. Then I arrived at the Bubble with no knowledge of Python, and was thrown into possibly the most Python-heavy team in Dublin at the time.

I like Python, but I also liked most of the other languages I worked with. But I look at languages like tools, and sometimes one tool is better than another because of the features, sometimes because it’s the one you have the most muscle memory with, and sometimes because it’s the one that is already in front of you.

As I write this post I have not yet started my next dayjob, and I have no idea what programming language I’ll end up using day to day. It might even be JavaScript, which I have next to no experience with. I’m not going to be picky — as long as it’s not a functional programming language, it’s just going to be a bit of work to get used to a new language: syntax, mindset, frameworks, libraries, … Not a walk in the park, but I find it part and parcel of the job.

And because I have changed my “dayjob language” a few times in the past few years, the language my FLOSS projects were written in tended to change in sync — because that would be the language I had the freshest memory of. As I said, I used a ton of Python in the Bubble, and that’s why you see me releasing a ton of Python projects now. If I had to write something in Ruby, I would have to go back and figure out how much the language has changed, and what the current best practices are — this is also true for all of my old projects: if I were to pick up ruby-elf again today, it would probably take me a few days just to figure out how Ruby has evolved in the seven years since I last touched it. If a project didn’t have much code, tests, and complexity built in, it would probably be faster to just rewrite the whole thing in a language I have more confidence in. Which probably shows why there’s so much stuff I have abandoned over the years.

Of course there are developers who focus on one language and know it inside out. They are the ones that make languages possible. But that is not what I do, it’s not my dayjob, and I am much more likely to just accept what the status quo for a project is, and adapt. I can only think of one time I rewrote something because of its language — and ironically, that was rewriting Python into Perl. Although in that case, I think the problem was less about the language itself (despite my ranting back then — little did I know that a few months later I would be forced to learn Python), and more about the language coming with a flat battery.

I find that the same is true from a company/employer/project owner point of view — languages are tools. And that means sometimes you stick to the tools you have, and not “buy” more tools (in the form of hiring experts or spending on training), while sometimes you don’t care which tools are used in the shop as long as the results are valid. When you have to build some software that a team has to maintain, for instance, you may not want to introduce a new language just because it’s the preferred language of one of the programmers — even if the language is perfectly fine by itself, you need to consider the cost of relying on a single person being the “expert” in that language.

As a counterexample — before the Bubble I was working for a small company (not quite a startup, particularly because it didn’t have any venture capital pouring in, so it had to spare expenses). The new product was being developed using the same proprietary language used for a previous breadwinner product. The language was clunky, modelled after early PHP, and hard to wire in with modern HTML, let alone JavaScript (this was 2012). Getting anyone to work on that product would require a significant amount of training, which is why the company owner was the one doing nearly all of the work on it (I did the minimum possible on it myself). Replacing the language would have required re-training the owner in some new language (since they didn’t really know any of the possible alternatives), but since this was 2012, I kept arguing that it would be significantly cheaper to hire one or two junior developers to reimplement the web side of the product in Rails, leaving the owner working on the Flash application instead — particularly because at the time there wasn’t really much of a web part to the product, and reimplementing it would have cost nearly nothing by comparison.

This does not mean that developing new languages is not useful or important. Or that the differences between languages don’t matter. I really enjoy writing Python — but if I needed to write something that is very high performance I wouldn’t go and use Python for it. I’m enjoying CircuitPython as it makes quick prototyping awesome, but I also understand it’s a limitation, as it needs more expensive components to run, and if every cent of your BOM counts, it might be a bad choice.

I also hold out hope for Rust to become useful as a “base system language” to replace good old C in a bunch of places — if nothing else because removing entire classes of mistakes would be nice. But that is also a compromise: it might introduce new classes of mistakes, and it will have side effects on bootstrapping new architectures. I never expect a migration to come without costs.

FreeStyle Libre 2 More Encryption Notes

Foreword: I know that I said I wouldn’t put reverse engineering projects on the Monday schedule, but I find myself with an imbalance between the two sets of posts, and I wanted to get this out sooner rather than later, in the hope that someone else can make progress.

You may remember I have been working on the FreeStyle Libre 2 encrypted communication protocol for a few months. I had actually taken a break from my Ghidra deep dive while I tried sorting out my future – and failed, thanks to the lockdown – but I got back to this a couple of weeks ago, once my art project was completed, and I wanted to see if sleeping on it for a bit meant getting a clearer view of it.

Unfortunately, I don’t think I’m any closer to figuring out how to speak to Libre 2 readers. I did manage to find some more information about the protocol, including renaming one of the commands to match the debug logs in the application. I do have some more information about the encoding though, which I thought I would share with the world, hoping it will help the next person trying to get more details on this — and hoping that they would share it with the world as well.

While I don’t have a final answer on what encryption they use on the Libre 2, I do have at least some visualization of what’s going on in the exchange sequence.

There are 15 bytes sent from the Libre 2 reader to the software. The first eight are the challenge, while the other seven look like a nonce of some kind, possibly an initialization vector, which is used in the encryption phase only.
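As a minimal sketch of how I’m reading this exchange (the 8/7 split is my interpretation of the captures, not a documented format):

```python
def split_challenge(payload: bytes) -> tuple[bytes, bytes]:
    """Split the 15-byte challenge message from the Libre 2 reader.

    The 8-byte challenge / 7-byte nonce split is my reading of the
    captures, not a documented format.
    """
    if len(payload) != 15:
        raise ValueError("expected 15 bytes of challenge payload")
    challenge = payload[:8]   # used to build the challenge response
    nonce = payload[8:]       # possibly an IV, used only while encrypting
    return challenge, nonce
```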

To build the challenge response, another eight bytes are filled with random data returned by CryptGenRandom, which is a fairly low-level, and deprecated, API. This is curious, given that the software itself is using Qt for the UI, but it makes more sense when you realise that they use the exact same code in the driver used for uploading to the LibreView service, which is not Qt based. It also likely explains why the encryption is not using the QtCryptography framework at all.

This challenge response is then encrypted with a key — there are two sets of keys: Authorization keys are used only for this challenge phase, and Session keys are used to handle the rest of the communication. Each set includes an Encryption and a MAC key. The Authorization keys are both seeded with just the serial number of the device in ASCII form, and two literal strings, as pictured above: AuthrEnc and AuthrMAC. The session keys’ seeds include a pair of 8-bytes values as provided by the device after the authorization completes.
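A sketch of the seed construction as I understand it; the concatenation order, and the actual key-derivation function that consumes these seeds, are assumptions on my part:

```python
def auth_key_seeds(serial: str) -> dict[str, bytes]:
    """Build the seeds for the two Authorization keys.

    What I found: the seeds combine the device serial number (ASCII)
    with the literal strings "AuthrEnc" and "AuthrMAC". The order of
    concatenation shown here is an assumption, and whatever KDF turns
    these seeds into actual keys is still unknown.
    """
    serial_bytes = serial.encode("ascii")
    return {
        "enc": serial_bytes + b"AuthrEnc",  # Authorization encryption key seed
        "mac": serial_bytes + b"AuthrMAC",  # Authorization MAC key seed
    }
```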

The encryption used is either a streaming cipher or a 64-bit block cipher. I know that because I have multiple captures from the same device in which the challenge started with the same 8 bytes (probably because it lacked enough entropy to be properly random at initialization time), and they encrypted to exactly the same output bytes. Since the cleartext adds a random component, if it were a 128-bit block cipher, you would expect different ciphertext in the output — which kind of defeats the purpose of those 8 random bytes, I guess?
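That deduction can be illustrated with a toy 64-bit block cipher in an ECB-like mode, using a truncated keyed hash as a stand-in (this is emphatically not the real algorithm, just a demonstration of the reasoning):

```python
import hashlib

BLOCK = 8  # 64-bit blocks

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for an unknown 64-bit block cipher: a keyed hash
    # truncated to the block size. Not real cryptography!
    return hashlib.sha256(key + block).digest()[:BLOCK]

def toy_ecb(key: bytes, cleartext: bytes) -> bytes:
    # ECB-like mode: each 8-byte block is encrypted independently.
    assert len(cleartext) % BLOCK == 0
    return b"".join(toy_block_encrypt(key, cleartext[i:i + BLOCK])
                    for i in range(0, len(cleartext), BLOCK))

key = b"demo-key"
# Two cleartexts sharing the same first 8 bytes, differing afterwards:
ct_a = toy_ecb(key, b"SAMEHDR!" + b"randomA!")
ct_b = toy_ecb(key, b"SAMEHDR!" + b"randomB!")
assert ct_a[:BLOCK] == ct_b[:BLOCK]  # identical first block leaks through
assert ct_a[BLOCK:] != ct_b[BLOCK:]
```

With a 128-bit block, the random second half would land in the same block as the repeated header, so all 16 output bytes would differ between captures.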

The encrypted challenge response is then embedded in the response message, which includes four constant bytes (they define the message type, the length, and the subcommand, plus an extra constant byte thrown in), and then processed by the MAC algorithm (with the Authorization MAC key) to produce a 64-bit MAC, which is tacked onto the end of the message. Then the whole thing is sent to the device, which will finally start answering.
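To make the framing concrete, here is a hypothetical sketch; the constant byte values below are placeholders I made up, not the real protocol constants:

```python
def build_challenge_response(encrypted: bytes, mac8: bytes) -> bytes:
    """Frame the challenge response: four leading constant bytes
    (message type, length, subcommand, plus one extra constant),
    then the encrypted payload, then the 8-byte MAC tacked on the end.

    The header values here are placeholders, NOT the real constants.
    """
    if len(mac8) != 8:
        raise ValueError("expected a 64-bit (8-byte) MAC")
    header = bytes([0x14, len(encrypted) + 9, 0x11, 0x16])  # hypothetical
    return header + encrypted + mac8
```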

As far as I can tell, the encryption algorithm is the same for Authorization and Session — with the exception of the different seed for the key generation. It also includes a different way to pass a nonce — the session encryption includes a sequence number, on both the device and the software side, which is sent in clear text and fed into the encryption (shifted left by 18 bits, don’t ask me!) In addition to the sequence number, the encrypted packets have an unencrypted MAC. This is 4 bytes, but it’s actually computed with the same algorithm as the authorization MAC; the remaining 4 bytes are just dropped on the floor.
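In sketch form (both helpers reflect my reading of the disassembly, not a confirmed specification):

```python
def session_nonce_input(sequence_number: int) -> int:
    # Observation from the disassembly: the cleartext sequence number
    # is fed into the encryption shifted left by 18 bits. Why 18,
    # I have no idea.
    return sequence_number << 18

def truncate_session_mac(full_mac: bytes) -> bytes:
    # The session MAC uses the same algorithm that produces the 64-bit
    # authorization MAC, but only the first 4 bytes are kept; the
    # remaining 4 are dropped on the floor. Which half is kept is an
    # assumption here.
    if len(full_mac) != 8:
        raise ValueError("expected the full 8-byte MAC")
    return full_mac[:4]
```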

There’s a lot more that I need to figure out in the code, not least because I don’t know much about cryptography (and I’m also not that good with Ghidra). I know that the key generation and the encryption/decryption functions are parameterized with an algorithm value, which likely corresponds to an enum from the library they used. And the parameterized functions dispatch via 21 objects (but likely not C++ objects, as they don’t seem to use vtables!), which can either point at a common function that returns an error (pretty much “not implemented”) or at an actual, implemented function — the functions check something: the enum in the case of key creation (which is, by the way, always 9), or some attribute of the object passed in, for encryption and decryption.

These are clearly coming from a library linked in statically — I can tell because the code style is totally different from any other part of Abbott’s code, and otherwise makes no sense. It is also possibly meant to be obfuscated, or at least made difficult to follow — it’s not always the same object out of the 21 that answers the encrypt/decrypt function, which makes it difficult to find which code is actually being executed.

I think at this point, the thing that is protecting their Libre 2 protocol the most is just the sheer amount of different styles of code in the binary: Qt, C++ with STL, C++ with C-style arrays, Windows APIs, this strange library, …

By the way, one thing that would most likely help with figuring this out would be the ability to feed selected command streams into the software. While devices such as the Facedancer can help, given that most of this work is done in virtual machines, I would rather have my old idea implemented. I might look for time to work on this if I can’t find anyone interested, but if you find that this is a useful idea, I would much prefer being involved without leading its implementation. Honestly, if I had more resources available, I would probably just pay someone to implement it, rather than buy a hardware Facedancer.

Bragging sARTSurday: Plushies At Home

#plushie #plushieathome #seagull

In this week’s sARTSurday, I want to show off my own creation, for once.

Because of the lockdown, we had to sacrifice not just our conventions, but also our visits to Kew Gardens — and despite it reopening next month, we’re not sure we feel safe enough to go and visit, since with diabetes I’m considered at risk. And Kew is where I would usually spend some time taking pictures in weather like we’ve been having… and that’s not happening any time soon.

Instead, since I started this weekly column, I have been taking quite a few more pictures inside the apartment. I even decided to invest in a couple of accessories for my camera to make it easier to take pictures of those and of my art project — namely a flashgun, and an L-bracket (which will be useful even in Kew, when I’m able to get there again).

So for the past week or so, inspired by the last post’s header picture, I decided to take “candid” shots of the plushies that we have at home. Most of these used to be on my desk at the office, both in Dublin and London — but given the current situation, they are likely going to stay at home for a while.

Good morning! Your usual?

I’m clearly not a professional photographer, I’m not even a particularly good photographer. But I thought it would make people smile to see them, and that’s all I care about.

If you want to see more pictures, particularly of squirrels, you can find them on Flickr, Facebook (separately from this blog) and Instagram. I have some more pictures to take of Star Wars plushies and LEGO sets, so keep your eyes on them if you’re into those.

More Chinese Glucometers: Sinocare Safe AQ UG

Years ago, I was visiting Shanghai for work, and picked up a Sannuo glucometer. Somehow that blog post keeps getting the interest of people, and indeed it has better stats than some of my more recent glucometer reviews. I found it strange, until one night, while my wife was playing online, I found myself browsing AliExpress and thought “I wonder if they sell glucometers”. Of course they do.

While browsing AliExpress for glucometers, it dawned on me not just why so many people kept finding my blog post, but also the answers to a number of questions I had about that meter, which I don’t think I would otherwise have had answers for. And so, I decided to throw some “craic money” at getting another Chinese glucometer to look at.

So, first of all, what’s going on with my blog post? It turns out that there are a lot of glucometers on AliExpress. Some are at least branded the same as you would find in Europe or the USA, with what look like official OneTouch and Abbott storefronts on AliExpress — which does not surprise me too much, as I had already found out that Abbott has a physical store in the UAE that sells Libre sensors, which work with my Libre reader but not with the mobile phone app. But beyond those you find a lot of generic names that don’t inspire much confidence — until you start noticing two big brands sold by a lot of different sellers: Sannuo and Sinocare. And as I noted in the previous post, the meter was branded Sannuo but had Sinocare names all around — it turns out the former is just a brand of the latter.

I also started getting a funny feeling of understanding about the miniUSB plug that was present on the Sannuo meter: most if not all of the generic branded meters had a similar plug. But this one was named a “code” port. And indeed a few of the Sannuo/Sinocare models had enough of an explanation in English to work out how this works.

Coding of testing strips is something that European and North American (at least) meters used to need in the past. You would get your packet of strips, and there would be a number on the outside (the “code”), which you would select on the meter either before or right after fitting the strip inside. The code carried information about the reactiveness of the strips and was needed to get an accurate reading. Over time, this practice has fallen out of favour, with “code-less” strips becoming the norm. In Europe in particular, it seems like the old style of coded strips is not even marketed anymore due to regulation.

The Sannuo meter I bought in Shanghai came with two bottles of “codeless” strips, but other strip bottles you can find on AliExpress appear to still be coded. Except instead of a two-digit “code”, they come with a “code chip”, which is pretty much a miniUSB plug connected to something. Which is why the plug is electrically active, but makes no sense when probed as USB. I have no idea how this idea came about, but I somehow suspect it has to do with miniUSB plugs being really cheap now that nobody wants to deal with them.

So back to the latest glucometer I received. I chose this particular Sinocare model because it has one feature I have never seen on any other meter: a test for uric acid. Now, I have no clue what this is meant for, and I don’t even pretend I would understand its readings, but it sounded like something I could have at least some fun with. As it turns out, this also plays into my idea of figuring out how that coding system works, since despite not needing codes for glucose readings, you do need one for the uric acid strips!

The Safe AQ is just as “clunky” as I felt the previous Sannuo to be. And now I have a bit more of an idea why: they are actually fairly easy to assemble. Unlike the press-fit of most of the “western” designs (terrible name, but it conveys the effect, so please excuse me on this one), these meters are very easy to open up. They even look like they could be repaired if, for instance, the display were to break. I’m seriously surprised by this. The inside boards are fairly well labelled, too. And very similar between these two otherwise fairly different models. Both meters expose test points for pogo pins under the battery compartment, which are properly labelled as well.

Another similarity with the Sannuo is that this meter – like nearly every meter I could see on AliExpress! – has a spring-loaded ejector for the strips. The box says “automatic” but it just means that you don’t have to touch the strip to throw it away. My best guess is that there’s some cultural significance to this; maybe it’s more common for people to test someone else’s blood in China, and so the ejector is seen as a big improvement. Or maybe there’s an even stronger disgust with bloodied objects, and the ejector makes things cleaner. I don’t know — if anyone does, please let me know.

Now, how does this work out as a meter? My impression is fairly good overall, but the UX leaves a lot to be desired. The screen is very readable, although not backlit. The accuracy is fairly good when compared with my Libre, both with blood samples and the sensor. But figuring out how to turn it on, and how to change the date/time, took a few tries. There’s no feedback telling you to keep the power button pressed for five seconds. On the other hand, the manual has fairly intelligible English, which is probably better than some of the stuff you buy directly on Amazon UK.

There’s a kicker in this whole story, of course. Who is Sinocare? You would say that, since I’m always complaining about using devices outside their spec, it would be out of character for me to make it easy to find and buy a glucometer that most likely has not been vetted under local pharmaceutical regulations. And yet I seem to be praising a meter that I got pretty much randomly off AliExpress.

What convinced me to order and review the Sinocare is that, while researching the names I found on AliExpress, I found something very interesting. Sinocare is a Chinese diagnostics company, probably the largest in the country. But they also own Trividia Health, a diagnostics company based in Florida with a UK subsidiary.

Trividia Health was a name that I already knew — they make glucometers called True Metrix, which you can find in USA at Walmart and CVS, and, as I found out more recently, in the UK at Boots. The True Metrix meters don’t seem to share anything, design-wise, with the Sinocare products, but you would expect that the two technology sets are not particularly different.

This also reminds me that I need to see if Trividia sells the cradle for the True Metrix Air in the UK. I kept forgetting to order one in time to pick it up in the USA during a work trip, and since I don’t expect to be there any time soon, it sounds like I should try to get one here.

Fake candles, and flame algorithms

The Birch Books LEGO set that I have been modifying has an interesting “fireplace” on the first floor of the townhouse. I have been wanting to wire that up to light up for the evening scenes in my smart lighting board, but I want it to look at least a bit realistic. But how do you do that?

As I previously noted, there are flame-effect LED lamps out there, which were looked at by both bigclive and Adam Savage this very year. But those are way too big for the scale we’re talking about here. Simplifying a lot, you can think of those lamps as round, monochrome LED panels showing a flame animation, like an old DOS demo. Instead, what I have to work with is going to be at most two LEDs — or at least two independent channels for the LEDs.

Thankfully, I didn’t have to look far to find something to learn from. A few months ago a friend of my wife gave us a very cute candle holder as a present, but since we’re renting, burning real candles in it is not really a good idea. Instead I turned to Amazon (though AliExpress would have done the trick just as well) for some fake candles (LED candles) that would do the trick. These are interesting because they are not just shaped like a candle; they have a flickering light like one as well. Three of them fit fairly nicely in the holder and did the trick of giving a bit of an atmosphere to our bedroom.

I was fully ready to sacrifice one of the candles to reverse engineer it, but the base comes off non-destructively, and the board inside is very easy to follow. Indeed, you can see the schematic of the board here on the right (I omitted the on/off switch for clarity), even though the board itself has space for more components. The whole candle is controlled by a microcontroller with a PIC12F-compatible pinout (but, as Hector pointed out, much more likely to be some random Chinese microcontroller instead).

It’s interesting to note that the LED starts in “candle” mode once the switch is turned to the “on” position, without using the remote control. My guess is that if you buy one of the versions that does not come with a remote control, you can add that functionality by just soldering in a TSOP381x decoder. It also shows why the battery on these things doesn’t really last as long as you may want it to, despite using the bigger, rarer and more expensive CR2450 batteries. The microcontroller is powered up all the time, waiting to decode a signal from the remote control, even if the LED is off. I wanted to measure the standby current, but it’s fairly hard to get the battery in series with the multimeter — maybe I should invest in a bench supply for this kind of work.

So how does this all work? The LED turns out to be a perfectly normal warm white LED, with a domed form factor that fits nicely in the carved space in the fake candle itself and helps diffuse the light. To produce the flame effect, the microcontroller uses PWM (pulse-width modulation) — which is a pretty common way to modulate the intensity of LEDs, and the way most RGB LEDs produce combined colours, just like on my insulin reminder. Varying the duty cycle (the ratio between “high” and “low” of the digital line) changes the intensity of the light (or of the specific colour, for RGB ones). If you keep varying the duty cycle, you get a varying intensity that simulates a flame.
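For instance, here is a small helper mapping a normalized brightness to a PWM duty-cycle register value; the 16-bit default matches what CircuitPython’s pwmio takes, but the width is just a parameter in this sketch:

```python
def to_pwm_duty(brightness: float, resolution_bits: int = 16) -> int:
    """Map a 0.0-1.0 brightness to a PWM duty-cycle register value.

    CircuitPython's pwmio, for instance, takes a 16-bit duty_cycle
    (0-65535); other PWM peripherals use different widths, so the
    bit width is a parameter rather than a hard assumption.
    """
    if not 0.0 <= brightness <= 1.0:
        raise ValueError("brightness must be within 0.0-1.0")
    return round(brightness * ((1 << resolution_bits) - 1))
```

Feeding a slowly varying sequence of brightness values through this mapping is all the flame effect amounts to.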

The screenshot you can see is from the Saleae Logic software. It shows the variable duty cycle over the span of a few seconds, and it’s pretty much unreadable. I could possibly write a decoder for Saleae Logic, and export the actual waveform the candle uses to simulate the flickering of a flame — but honestly, that sounds like a lot of unjustified work: there’s no one true flame algorithm; as long as the flickering looks the part, it’s going to be fine.

Example of a generated Perlin noise waveform for the LED flickering

So, how do you generate the right waveform? Well, I had a very vague idea of how when I started, but thanks to the awesome people in the Adafruit Discord (shout out to IoTPanic and OrangeworksDesign!) I found quite a bit of information to go by — while there are more “proper” way to simulate a fire, Perlin noise is a very good starting point for it. And what do you know? There’s a Python package for it which happens to be maintained by a friend of mine!
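To keep the illustration dependency-free, here is a rough value-noise stand-in for that package: random levels at fixed lattice points, smoothly interpolated in between (a simplification of proper Perlin noise, which uses gradients rather than values):

```python
import random

def smooth_noise_waveform(samples: int, period: int = 16,
                          seed: int = 1234) -> list[float]:
    """1-D value noise: random levels at fixed intervals, smoothly
    interpolated in between. A poor man's stand-in for a real Perlin
    noise implementation, good enough for a flame-like flicker.
    Output values stay within 0.0-1.0.
    """
    rng = random.Random(seed)
    # One random level per lattice point, plus one for the final segment.
    points = [rng.random() for _ in range(samples // period + 2)]
    wave = []
    for i in range(samples):
        left = i // period
        t = (i % period) / period
        # Smoothstep easing between the two neighbouring lattice levels.
        eased = t * t * (3 - 2 * t)
        wave.append(points[left] * (1 - eased) + points[left + 1] * eased)
    return wave
```

Each sample can then be turned into a PWM duty cycle; the `period` knob trades fast flicker against slow breathing.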

Now there’s a bit of an issue in how to properly output the waveform as PWM — in particular its frequency and resolution. I pretty much just threw something at the wall; it works, and I’ll refine it later if needed, but the result is acceptable enough for what I have in mind, at least when it comes to just the waveform simulation.

The code I threw at the wall for this is only going to be able to do the flickering. It doesn’t allow for much control, and it pretty much expects full control of the execution — almost the same as the microcontroller on the original board, which literally turns off the moment the IR decoder receives (or thinks it’s receiving) a signal.

I was originally planning to implement this on the Adafruit Feather M4 Express with PWMAudioOut — it’s not audio per se, but it’s pretty much the same thing. Unfortunately, it looks like the module needed for this is not actually built into the specific version of CircuitPython for that Feather. But now we’re in the business of productionizing code, rather than figuring out how to implement it.

Sweet Slice of Life: Sarah Graley

It’s a bittersweet time to post this, but the content is very sweet, so I hope it’ll brighten your days, as all sARTSurdays aim to. This weekend was meant to be the MCM Comic Con weekend in London, but in the current situation, the ExCeL centre where it was supposed to take place is still the NHS Nightingale, as far as I can tell. With the rescheduled July date also cancelled, we’re currently not sure when, or even if, we’ll be back at a convention. And at this MCM in particular, we were planning to look for Sarah and Stef (again), to grab the set of Our Super Adventure books.

At the last MCM (October 2019), we were just walking the floor when we saw a giant kitty showing clearly above the booths — cat people as we are, my wife and I ran straight towards it. At that point we had no idea who Sarah and Stef were — but the Pesto plushie was too cute not to pick up, so we bought it, and for the following day I had her mischievous glare staring out of my bag.

A little later, while queueing for my photoshoot turn with Simon Pegg, we decided to take a look at the comic — and loved it! But content warning: if you’re the type of person who suffers from being alone or lonely, it might be bad for your mood. I know that I wouldn’t have appreciated the comic nearly as much if I hadn’t found it as a newlywed. But otherwise, it’s one of the sweetest and cutest online comics I’ve ever read — and positive, too! It’s not trying to make it sound like life is completely carefree, but it makes light fun of the harder moments of a relationship, and that made us laugh out loud on the floor of the con.

Our plan for this MCM was to go and see them again, and pick up a book or two (or three) — we had a preference for picking them up directly from them, also to thank them for the many laughs we got from their comic — but given the situation, online ordering will do. And this weekend there’s an exclusive pin thrown in, which was supposed to be exclusive to MCM. (Sigh, I did say bittersweet, right? Every time I type MCM I sigh.)

In addition to Our Super Adventure, which is posted on their website, Sarah’s Instagram and Facebook pages, and probably a few more syndication websites, they stream their game sessions on Twitch, where this very Saturday they have been running an “Our Super Stream Con” from home. (Although by the time you read this post it’s probably mostly over, unfortunately.)

So if you’re up for a sweet laugh, particularly while lying in bed with your significant other after an exhausting lockdown day, give Sarah’s and Stef’s adventures a read. You won’t regret it.

Don’t Ignore Windows 10 as a Development Platform for FLOSS

Important Preface: This blog post was originally written on 2020-05-12, and scheduled for later publication, inspired by this short Twitter thread. As such it well predates Microsoft’s announcement of expanding WSL2 support to graphical apps. I considered trashing the blog post, or seriously re-editing it in light of the announcement, but I honestly lack the energy to do that now. It left a bad taste in my mouth to know that it will likely get drowned out in the noise of the new WSL2 features announcement.

Given the topic of this post, I guess I need to add a preface to point out my “FLOSS creds” — because I have already seen too many attacks on people who even use Windows at all. I have been an opensource developer for over fifteen years now, and part of the reason why I left my last bubble was that it made it difficult for me to contribute to various opensource projects. I say this because I’m clearly a supporter of Free Software and Open Source, wherever possible. I also think that different people have different needs, and that ignoring this is a failure of the FLOSS movement as a whole.

The “Year of Linux on the Desktop” is now a meme that has run its course to the point of being annoying. Despite what FLOSS advocates keep saying, “Linux on the Desktop” is not really moving, and while I do have some strong opinions on this, that’s for another day. Most users, and in particular newcomers to FLOSS (both as users and developers), are probably using a more “user friendly” platform — if you leave a comment with the joke about UNIX being selective with its friends, you’ll end up in a plonkfile, be warned.

About ten years ago, the trend seemed to be for FLOSS developers to use MacBooks as their daily laptops. I did that for a while myself — a UNIX-based platform with all the tools of the trade (SSH, Emacs, GCC, Ruby, and so on), which allowed quite a bit of work to get done without access to a Linux platform. And at the same time, you had the stability of Mac OS X, great battery life, and hardware that worked out of the box. More recently, though, Apple’s move towards “walled gardens” has been eroding that feasibility.

But back to the main topic. Over the past several years, I’ve been using a “mixed setup” — a Linux laptop (or more recently desktop) for development, and a Windows (7, then 10) desktop for playing games, editing photos, designing PCBs, and logic analysis. The latter because Saleae Logic takes a significant amount of RAM when analysing high-frequency signals, and since I have been giving my gamestation as much RAM as I can just for Lightroom, it makes sense to run it on the machine with 128GB of RAM.

But more recently I have been exploring the possibility of using Windows 10 as a development platform. In part because my wife has been learning Python, and since also learning a new operating system and paradigm at the same time would have been a bloody mess, she’s doing so on Windows 10 using Visual Studio Code and Python 3 as distributed through the Microsoft Store. While helping her, I got some exposure to Windows as a Python development platform, so I gave it a try when working on my hack to rename PDF files, and it turned out to be quite okay for a relatively simple workflow. And the work on the Python extension keeps making it more and more interesting — I’m not afraid to say that Visual Studio Code is better integrated with Python than Emacs, and I’m a long-time Emacs user!

In the last week I have stepped up further how much development I do on Windows 10 itself. I have been using Hyper-V virtual machines for Ghidra, to make use of the bigger screen (although admittedly I’m just using RDP to connect to the VM, so it doesn’t really matter much where it’s running), and in my last dive into the Libre 2 code I felt the need for a fast and responsive editor to execute parts of the disassembled code and figure out what it’s trying to do — so once again, Visual Studio Code to the rescue.

Indeed, Windows 10 now comes with an SSH client, and Visual Studio Code integrates very well with it, which meant I could just edit the files saved in the virtual machine and have the IDE build them with GCC and execute them to get myself an answer.

Then while I was trying to use packetdiag to prepare some diagrams (for a future post on the Libre 2 again), I found myself wondering how to share files between computers (to use the bigger screen for drawing)… until I realised I could just install the Python module on Windows and do all the work there. Except for needing sed to remove an incorrect field generated in the SVG — at which point I just opened my Debian shell running in WSL and edited the files without having to share them with anything. Uh, score?
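For those who haven’t run into it, packetdiag (part of the blockdiag/nwdiag family) takes a small text DSL describing bit fields and renders it as a diagram. A minimal example along the lines of what its documentation shows — the field names here are just placeholders, not the diagram I was actually drawing:

```
packetdiag {
  colwidth = 32
  node_height = 72

  0-15: Source Port
  16-31: Destination Port
  32-63: Sequence Number
}
```

Rendered with something like `packetdiag -Tsvg fields.diag`, which is the SVG output I then had to patch up with sed.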

So I have been wondering: what’s really stopping me from giving up my Linux workstation most of the time? Well, there’s hardware access — glucometerutils wouldn’t really work on WSL unless Microsoft plans a significant set of compatibility interfaces. The same goes for hardware SSH tokens — despite PC/SC being a Windows technology to begin with. Screen and tabbed shells are definitely easier to run on Linux right now, but I’ve seen tweets about a modern terminal being developed by Microsoft, and even released as FLOSS!

Ironically, I think it’s editing this blog that is the most miserable experience for me on Windows. And not just because of the different keyboard (as I share the gamestation with my wife, the keyboard is physically a UK keyboard — even though I type US International), but also because I miss my compose key. You may have noticed already that this post is full of em-dashes and en-dashes. Yes, I have been told about WinCompose, but last time I tried using it, it didn’t work and even screwed up my keyboard altogether. I’m now trying it again, at least on one of my computers, and if it doesn’t explode in my face again, I may just give it another try later.

And of course it’s probably still not as easy to set up a build environment for things like unpaper (although at that point, you can definitely run it in WSL!), or a development environment for actual Windows applications. But this is all a matter of a different set of compromises.

Honestly speaking, it’s very possible that I could survive with a Windows 10 laptop for my on-the-go opensource work, rather than the Linux one I’ve been using. With the added benefit of being able to play Settlers 3 without having to jump through all the hoops of the last time I tried. Which is why I decided the pandemic lockdown is the perfect time to try this out: I barely use my Linux laptop anyway, since I have a working Linux workstation available all the time. So I have reinstalled my Dell XPS 9360 with Windows 10 Pro, and installed both a whole set of development tools (Visual Studio Code, Mu Editor, Git, …) and a bunch of “simple” games (Settlers, Caesar 3, Pharaoh, Age of Empires II HD); Discord ended up somewhere in the middle, since it’s actually what I use to interact with the Adafruit folks.

This doesn’t mean I’ll give up on Linux as an operating system — but I’m a strong supporter of “software biodiversity”, so the same way I try to keep my software working on FreeBSD, I don’t see why it shouldn’t work on Windows. And in particular, I have always found providing FLOSS software on Windows to be a great way to introduce new users to the concept of FLOSS — and focusing on providing FLOSS development tools gives people an even bigger chance to build more FLOSS tools.

So is everything ready and working fine? Far from it. There are a lot of rough edges, some of which I found myself — which is why I’m experimenting with developing more on Windows 10, to see what can be improved. For instance, I know that the reuse-tool has some rough edges with the encoding of input arguments, since PowerShell appears to still not default to UTF-8. And I failed to use pre-commit for one of my projects — although I have not yet taken note of what exactly failed, to start fixing it.
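As a workaround I have seen suggested — not a guaranteed fix for reuse-tool specifically, just the usual knobs for this class of problem — you can force UTF-8 both on the PowerShell side and on the Python side:

```powershell
# Make PowerShell exchange UTF-8 with native programs (current session only).
[Console]::OutputEncoding = [System.Text.Encoding]::UTF8
$OutputEncoding = [System.Text.Encoding]::UTF8

# Ask Python (3.7+) to use UTF-8 for I/O regardless of the console code page.
$env:PYTHONUTF8 = "1"
```

Having to know these incantations at all is, of course, exactly the kind of rough edge I mean.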

Another rough edge is documentation. Too much of it assumes a UNIX environment, and a lot of it, if it covers Windows at all, assumes “old school” batch files are in use (for instance for Python virtualenv support), rather than the more modern PowerShell. This is not new — a lot of modern documentation is only valid on bash, and if you were to use an older operating system such as Solaris you would find yourself lost with the tcsh differences. You can see similar concerns back in the days when bash was not standard, and maybe we’ll have to go back to that kind of deal. Or maybe we’ll end up with some “standardization” of documentation that can be translated between different shells. Who knows.
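The virtualenv case is a good concrete example: the activation step that READMEs usually document only for bash actually differs per shell (paths here assume a virtualenv created in a directory called `venv`):

```
# POSIX sh/bash (what most documentation shows):
source venv/bin/activate

# Windows cmd.exe ("old school" batch):
venv\Scripts\activate.bat

# Windows PowerShell:
venv\Scripts\Activate.ps1
```

Three different commands for the same logical step — so documentation that only shows one of them leaves two sets of users guessing.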

But to wrap this up, I want to give a heads-up to all my fellow FLOSS developers: Windows 10 shouldn’t be underestimated as a development platform. And if they intend to be widely open to contributions, they should probably give a thought to how their code works on Windows. I know I’ll have to keep this in mind in the future.

Upcoming electronics projects (and posts)

Because of a strange alignment between my decision to leave Google to find a new challenge, and the pandemic causing a lockdown of most countries (including the UK, where I live), you might have noticed more activity on this blog. Indeed, for the past two months I maintained an almost perfect record of three posts a week, up from the occasional post I have written in the past few years. In part this was achieved by sticking to a “programme schedule” — I started posting on Mondays about my art project – which then expanded into the insulin reminder – then on Thursdays I had a rotating tech post, finishing the week up with sARTSurdays.

This week is a bit disruptive, because while I do have topics to fill the Monday schedule, they are starting to get a bit more scatterbrained, so I want to regroup a bit, and gauge what the interest around them is in the first place. As a starting point, the topic for Mondays is likely to stay electronics — following up from the 8051 usage in Birch Books, and the Feather notification light.

As I have previously suggested on Twitter, I plan on controlling my Kodi HTPC with a vintage, late-’80s Sony SVHS remote control. Just for the craic, because I picked it up out of nostalgia when I went to Weird Stuff a few years ago — I’m sad it’s closed now, but thankful to Mike for having brought me there the first time. The original intention was to figure out how the complicated VCR recording timer configuration worked — but, not unexpectedly, the LCD panel is not working right, so that might not be feasible. I might have to do a bit more work and open it up, and that will probably be a blog post by itself.

Speaking of Sony, remotes and electronics — I’m also trying to get something else to work. I have a Sony TV connected to an HDMI switcher, and sometimes it gets stuck with the ARC not initializing properly. Fixing it is relatively straightforward (just disable and re-enable the ARC), but it takes a few remote control button presses… so I’m trying to use an Adafruit Feather to transmit the right sequence of infrared commands as a macro to fix it. Which is why I started working on pysirc. There’s a bit more to it, to be quite honest, as I would like to have single-click selection of inputs with multiple switchers, but again that’s going to be a post by itself.
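I won’t guess at pysirc’s actual API here, but as background, the Sony SIRC protocol those remotes speak is simple enough to sketch. This is a hypothetical encoder — the function name and everything about it is mine, not pysirc code — assuming the commonly documented 12-bit SIRC timing (2.4ms header mark, pulse-width-coded bits, 7-bit command then 5-bit address, least significant bit first):

```python
# Sketch of Sony SIRC-12 frame timing, in microseconds. Illustrative only,
# NOT the pysirc API. Timings per the commonly documented SIRC protocol.
T = 600  # base time unit: 600 microseconds

def sirc12_pulses(command, address):
    """Return (mark, space) pairs for a 12-bit SIRC frame.

    The 7-bit command is sent first, then the 5-bit address, LSB first.
    A '1' bit is a 1200us mark, a '0' bit a 600us mark; every space is 600us.
    """
    pulses = [(4 * T, T)]  # header: 2.4ms mark, 600us space
    bits = [(command >> i) & 1 for i in range(7)]   # command, LSB first
    bits += [(address >> i) & 1 for i in range(5)]  # address, LSB first
    for bit in bits:
        pulses.append((2 * T if bit else T, T))
    return pulses

# Example: command 21, address 1 is commonly listed as "power" for Sony TVs.
frame = sirc12_pulses(21, 1)
```

On a Feather, a list like this would then be fed to whatever drives the IR LED at the 40kHz carrier — but that part is exactly what the actual project is about.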

Then there’s some trimming work for the Birch Books art project. The PCBs are not here yet, so I don’t know whether I’ll have to respin them. If so, expect a mistakes-and-lessons post about it. I will also likely spend some more time figuring out how to make the board design more “proper”, if possible. And I still want to sit down and see how I can get the same actuator board to work with the Feather M0 — because I’ll be honest and say that CircuitPython is much more enjoyable to work with than the nearly-C accepted by SDCC.

Also, while the actuator board supports it, I have currently left out turning on the fireplace lights in Birch Books. I’m of two minds about this — I know there are some flame-effect single LEDs out there, but they don’t appear to be easy to procure. Both bigclive and Adam Savage have shown flame-effect LED bulbs, but those don’t really work at this small scale.

There are cheap fake-candle LED lamps out there – I saw them for the first time in Italy, at the one local pub I enjoy going to (they serve so many varieties of tea!), and I actually have a few of them at home – and the way they work is by applying PWM to a normal LED (usually a warm-light one). So what I’m planning to do is dive into how those candles do that, and see if I can replicate the same feat on either the 8051 or the Feather.
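A first experiment could be as simple as a random walk on the PWM duty cycle. This is a plain-Python sketch of that idea — all the brightness ranges are picked arbitrarily by me, not measured from a real candle; on CircuitPython each value would just be assigned to a pwmio.PWMOut’s duty_cycle in a loop with a short sleep between steps:

```python
import random

def flicker_steps(n, seed=None):
    """Generate n 16-bit duty-cycle values that wander like a flame.

    Random-walk around a warm base brightness, clamped so the LED never
    goes fully dark nor fully bright. The bounds and step size here are
    guesses to be tuned by eye against a real fake-candle.
    """
    rng = random.Random(seed)
    level = 40000  # starting brightness, out of 65535
    steps = []
    for _ in range(n):
        level += rng.randint(-6000, 6000)  # wander up or down a bit
        level = max(15000, min(60000, level))  # keep it glowing
        steps.append(level)
    return steps
```

Whether the commercial candles do something this naive, or bake a pre-recorded flicker pattern into the chip, is exactly what I want to find out by poking at one.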

I don’t know when the ESP32 boards I ordered will arrive, but I’ll probably spend some time playing with those and writing about it then. It would be nice to have an easy way to “swap out the brains” of my various projects, and compare how to do things between them.

And I’m sure that, given the direction this is going, I’ll have enough stuff to keep myself entertained outside of work for the remainder of the lockdown.

Oh, before I forget — turns out that I’m now hanging out on Discord. Adafruit has a server, which seems to be a very easygoing and welcoming way to interact with the CircuitPython development team, as well as discussing options and showing off. If you happen to know of welcoming and interesting Discord servers I might be interested in, feel free to let me know.

I have not forgotten about the various glucometers I acquired in the past few months and have not yet reversed. There will be more posts about glucometers, but for those I’m using the Thursday slot, as I have not yet gone as far as physically tapping into them. So unless my other electronics projects dry up, it’s going to continue that way.

Metal Spotlight: Beast in Black

I have overlooked music in this past series of sARTSurdays, and it’s time to fix this mistake, with a metal band that is close to my heart — Beast in Black. And the reason they are close to my heart is that it was thanks to them that I met my wife — she was coming to the Rhapsody Reunion concert to see them as the support act, while I was there for the main act.

As you can probably guess by, uh, everything up to now (the title, Rhapsody’s involvement, the style of the t-shirts, …), Beast in Black are a metal band, so if you don’t like that kind of music it’s unlikely you’ll be interested — but if you do, stay with me. It’s not just metal, it’s metal with ’80s throwbacks, pretty much what our generation would find nostalgic if we ever went in for that kind of music. Which is why my wife loved them from early on, and I found myself appreciating them with gusto.

I think that for me, personally, part of the pleasure is that they are not bass-heavy — my ears tend to prefer higher-pitched sounds (funny how ears work), which is why I originally started listening to Dragonforce. So between Yannis’s voice and Anton’s guitar work, my wife didn’t have much work to do to convince me.

Speaking of Yannis, make sure you check out his YouTube Channel — in addition to singing in Beast in Black, he’s releasing vocal covers of… lots. Nightwish? Check. Disney’s Frozen? Check. Zayn (uh?)? Check. I shouldn’t be surprised that he seems to have quite the fan club, as proven by the folks we chatted with in the queue to see them in Amsterdam.

Okay so I should probably point out that we can come out a bit… strong in our support. After seeing them in London the night we met, my wife went to Japan explicitly to see them at a festival there, and together we saw them again in London (twice), and then went to Amsterdam and Budapest for two of their concerts — taking the time to make a proper holiday out of them. And thinking back, I’m fairly sure I gained a few kilos in Budapest, the food was so good.

You may have noticed from the t-shirt picture the Beast riding a very surprised unicorn. For once this is not a reference to Unstable Unicorns, but rather to the band joining the Scottish band Gloryhammer in their British Isles tour — Gloryhammer being known for the Unicorn Invasion of Dundee, which does make me wonder whether there’s something up in Scotland when it comes to supernatural invasions.

So, pick your poison between Spotify, Google Play Music / YouTube Music, Apple Music, Amazon Music, CDs, vinyl, cassette tape — and have a listen. Pump up the volume (if you can, your mileage may vary depending on whether your neighbours would like the music), and enjoy some “expensive cheese”, as Derek once said.