Beurer GL50, Linux and Debug Interfaces

In the previous post, where I reviewed the Beurer GL50, I said that on Windows the device appears as a CD-Rom containing the installer and a portable version of the software used to download the data off it. This is actually quite handy for users, but of course it leaves Linux and macOS users behind, unless you want to use the Bluetooth interface instead.

I did note that on Linux the device does not work correctly at all. Indeed, when you connect it to a modern Linux kernel, it fails to mount. But because of the way udev senses a new CD-Rom being inserted, it also causes an infinite loop in userspace, making udev use most of a single core for hours and hours, trying to process CD-in and CD-out events.

When I first noticed it, I thought the problem was in the USB Mass Storage implementation, but at the end of the day it turned out to be one layer below that, in the SCSI command implementation instead. Because yes, of course, USB Mass Storage virtual CD-Rom devices still mostly sit on top of SCSI implementations.

To provide enough context, and to remind myself how I went about this if I ever forget, the Beurer device appears to implement its virtual CD-Rom interface on a chip developed by either Cygnal or Silicon Labs (the latter bought the former in 2003). I only know the device's Product ID, 0x85ED, and I failed to track down the Silicon Labs model to figure out why and how.

To find my way around the Linux kernel, and to try to get the device to connect at all, I ended up taking a page out of marcan's book and used qemu's ability to launch a Linux kernel directly, with a minimal initramfs that contains only the bare minimum of files. In my case, I used the busybox-static binary that comes with openSUSE as the base, since I didn't need any particular reproduction case besides trying to mount the device.
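
For reference, this is roughly the kind of throwaway setup I mean. The sketch below is a reconstruction rather than the exact script I used; the kernel and busybox paths, and the Silicon Labs vendor ID (0x10c4) for the USB passthrough, are assumptions you would adjust to your own kernel build and distribution:

#!/usr/bin/env python3
"""Build a throwaway initramfs around busybox and boot a kernel in qemu.

A sketch: the paths and the vendor/product IDs for the passthrough are
assumptions, adjust them to your setup before using this.
"""

import os
import shutil
import subprocess
import tempfile

KERNEL = "arch/x86/boot/bzImage"     # the freshly built kernel
BUSYBOX = "/usr/bin/busybox-static"  # e.g. openSUSE's busybox-static binary

INIT = """#!/bin/busybox sh
/bin/busybox --install -s /bin
mount -t proc proc /proc
mount -t sysfs sys /sys
mount -t devtmpfs dev /dev
exec sh
"""


def build_initramfs(output: str) -> None:
    with tempfile.TemporaryDirectory() as root:
        for directory in ("bin", "dev", "proc", "sys", "mnt"):
            os.mkdir(os.path.join(root, directory))
        shutil.copy(BUSYBOX, os.path.join(root, "bin", "busybox"))
        init_path = os.path.join(root, "init")
        with open(init_path, "w") as init_file:
            init_file.write(INIT)
        os.chmod(init_path, 0o755)
        # find | cpio -o -H newc | gzip, driven from Python.
        files = subprocess.run(["find", "."], cwd=root, check=True,
                               capture_output=True).stdout
        cpio = subprocess.run(["cpio", "-o", "-H", "newc"], cwd=root,
                              input=files, check=True, capture_output=True).stdout
        with open(output, "wb") as archive:
            archive.write(subprocess.run(["gzip"], input=cpio, check=True,
                                         capture_output=True).stdout)


def run_qemu(initramfs: str) -> None:
    subprocess.run([
        "qemu-system-x86_64", "-nographic",
        "-kernel", KERNEL, "-initrd", initramfs,
        "-append", "console=ttyS0 rdinit=/init",
        # Pass the meter through to the guest (host vendor/product IDs assumed).
        "-usb", "-device", "usb-host,vendorid=0x10c4,productid=0x85ed",
    ], check=True)


if __name__ == "__main__":
    build_initramfs("initramfs.cpio.gz")
    run_qemu("initramfs.cpio.gz")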

The next problem was figuring out how to get the right debug information. At first I needed to inspect at least four separate parts of the kernel: USB Mass Storage, the Uniform (sic) CD-Rom driver, the SCSI layer, and the ISO9660 filesystem support. None of those looked like a clear culprit at the very beginning, so debugging time it was. Each of them appears to have its own idea of how to do debugging at all, at least up to version 5.3, which is the one I've been hacking on.

The USB Mass Storage layer has its own configuration option (CONFIG_USB_STORAGE_DEBUG), and once that is enabled in the kernel config, a ton of information on the USB Mass Storage traffic is output on the kernel console. SCSI comes with its own logging support (CONFIG_SCSI_LOGGING), but as I found out a few days of hacking later, you also need to enable it through /proc/sys/dev/scsi/logging_level, and to do so you need to calculate an annoying bitmask. Thankfully there's a tool in sg3_utils called scsi_logging_level to do it for you… but it says a lot that it's needed, in my opinion. The block layer in turn has its own CONFIG_BLK_DEBUG_FS option, but I didn't even get around to looking at how that's configured.
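
To give an idea of what that bitmask looks like, here is a minimal Python sketch along the lines of what scsi_logging_level does. The facility names and shift values mirror the ten 3-bit fields the kernel's SCSI logging uses, but double-check them against your kernel version before trusting them:

#!/usr/bin/env python3
"""A minimal stand-in for sg3_utils' scsi_logging_level, as a sketch only.

The logging level is ten 3-bit fields packed into one integer; the shift
values below mirror the kernel's SCSI logging facilities, but verify them
against your kernel source before relying on them.
"""

SHIFTS = {
    "error": 0,
    "timeout": 3,
    "scan": 6,
    "mlqueue": 9,       # mid-level command queueing
    "mlcomplete": 12,   # mid-level command completion
    "llqueue": 15,      # low-level (HBA) queueing
    "llcomplete": 18,
    "hlqueue": 21,      # high-level (sd/sr/st) queueing
    "hlcomplete": 24,
    "ioctl": 27,
}


def logging_level(**levels: int) -> int:
    """Build the value to write to /proc/sys/dev/scsi/logging_level."""
    value = 0
    for facility, level in levels.items():
        if not 0 <= level <= 7:
            raise ValueError(f"{facility}: each level must fit in 3 bits")
        value |= level << SHIFTS[facility]
    return value


if __name__ == "__main__":
    # Verbose mid-level queueing/completion logging, plus some error logging.
    value = logging_level(mlqueue=3, mlcomplete=3, error=2)
    print(f"echo {value} > /proc/sys/dev/scsi/logging_level")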

The SCSI CD driver (sr) has a few debug outputs that need to be enabled by removing manual #if conditions in the code, while the cdrom driver comes with its own log level configuration, a module parameter to enable the logging, and overall a complicated set of debug knobs. And just enabling them is not enough: at some point the debug output in the cdrom driver was migrated to the modern dynamic debug support, which means you need to enable both the driver's own knob and dynamic debug for the module. I sent a patch to just remove the driver-specific knobs.
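
For reference, the dynamic debug side of that dance looks more or less like this. A sketch to be run as root, assuming debugfs is mounted in the usual place and the drivers are built as modules; the driver's own legacy knob is separate and is set at load time instead:

#!/usr/bin/env python3
"""Turn on the cdrom and sr drivers' pr_debug() output through dynamic debug.

A sketch, to be run as root; assumes debugfs is mounted at /sys/kernel/debug.
The cdrom driver's own legacy knob is a module parameter, set with
"modprobe cdrom debug=1" or cdrom.debug=1 on the kernel command line.
"""

DYNAMIC_DEBUG_CONTROL = "/sys/kernel/debug/dynamic_debug/control"


def enable_module_debug(module: str) -> None:
    # "+p" enables printing for every debug call site in the given module.
    with open(DYNAMIC_DEBUG_CONTROL, "w") as control:
        control.write(f"module {module} +p\n")


if __name__ == "__main__":
    enable_module_debug("cdrom")
    enable_module_debug("sr_mod")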

Funnily enough, when I sent the first version of the patch, I was told about the ftrace interface, which turned out to be perfect for continuing to sort out the calls that I needed to tweak. This turned into another patch, which removes all the debug output that is redundant with ftrace.
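
If you have never used it, the ftrace side is refreshingly uniform. Here's a sketch of the kind of setup I mean, assuming tracefs is mounted at /sys/kernel/tracing (older setups have the same files under /sys/kernel/debug/tracing) and that the drivers are loaded as modules:

#!/usr/bin/env python3
"""Trace every function in the cdrom and sr_mod modules; a sketch, run as root.

Assumes tracefs at /sys/kernel/tracing. With built-in drivers the :mod:
filter won't match, and you'd filter on function name patterns instead.
"""

TRACING = "/sys/kernel/tracing"


def write(node: str, value: str) -> None:
    with open(f"{TRACING}/{node}", "w") as handle:
        handle.write(value)


def trace_cdrom_stack() -> None:
    # Limit tracing to the drivers of interest, then use function_graph to
    # see who calls what, and how long each call takes.
    write("set_ftrace_filter", "*:mod:cdrom *:mod:sr_mod")
    write("current_tracer", "function_graph")
    write("tracing_on", "1")


if __name__ == "__main__":
    trace_cdrom_stack()
    print(f"now poke the device and read {TRACING}/trace")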

So after all of this, what was the problem? Well, there's a patch for that, too. The chip used by this meter does not actually implement all the MMC commands, nor all of the audio CD commands. Some of those missing features are okay: an error returned from the device is properly ignored. Others cause further SCSI commands to fail, and that's why I ended up having to implement vendor-specific support to mask away quite a few features, and to gate their usage in a few functions. It appears to me that as CD-Rom, CD-RW, and DVD drives became more standard, the driver stopped properly gating feature usage.

Well, I don't have more details to share about what I did, besides what is already in the patches. But if there's a lesson here, it's that if you want to sink your teeth into the Linux kernel's code, you can definitely take a peek at a random old driver and figure out whether it was over-engineered in a past that did not come with nice trimmings such as ftrace, dynamic debug support, or generally the idea that the kernel is one big common project.

Glucometer Review: beurer GL50 evo

I was looking for a new puzzle to solve after I finally finished with the GlucoRx Nexus (aka TaiDoc TD-4277), so I decided to check out what Boots, being one of the biggest pharmacies in the country, would show on their website under “glucometer”. The answer was the Beurer GL50, which surprised me because I didn't know Beurer made glucometers at all. It was also extremely overpriced at £55. But thankfully I found it for £20 at Argos/eBay, so I decided to give it a try.

The reason I was happy to get one was that the device itself looked interesting, and reminded me of the Accu-Chek Mobile, with its all-in-one design. While the website calls it a 3-in-1, there are only two components to the device: the meter itself and the lancing device. The “third” device is the USB connector that appears when you disconnect the other two. I have to say that this is a very interesting approach, as it makes it much easier to connect to a computer – if it weren't for the fact that the size of the meter makes it very awkward to plug in.

On my laptop, I can only use it in the USB port on the right, because on the left it would cover the USB-C port I use for charging. It's also fairly tall, which makes it hard to use on chargers such as my trusted Anker 5-port USB-C (of which I have five, spread across rooms). In the end, I had to remove two cables from one of them to be able to charge the meter, which is required for it to be usable at all when it arrives.

To be honest, I’m not sure if the battery being discharged was normal or due to the fact that the device appears to have been left on shelves for a while: the five sample strips to test the device expire in less than two months. I guess it’s not the kind of device that flies off the shelves.

FreeStyle Libre, gl50 evo, GlucoRx Nexus

So how does the device fare compared to other meters? Size-wise, it's much nicer to handle than the GlucoRx, although it looks bigger than the FreeStyle Libre reader. Part of the reason is that the device, in its default configuration, includes the lancing device, unlike both of the meters I'm comparing it with above. If you don't plan to use the included lancing device, for instance because, like me, you already have a favourite one (I'm partial to the OneTouch Delica), you can remove it and hide the USB plug with the alternative cap provided. The meter then has a much smaller profile than the Libre, too. I actually like the compact size better than the spread-out one of the FreeStyle Precision Neo.

FreeStyle Libre, gl50 evo (without lancing device), GlucoRx Nexus

Interface-wise, the gl50 is confusingly different from anything I have seen before. It comes with a flush on/off switch on the side, which would be frustrating for most people with short nails, or for people with impaired motor control. Practically, I think this and the “Nexus” are at opposite ends of the scale: the TD-4277 has a big, blocky display that can be read without glasses and a single, big button, which makes it a perfect meter for the elderly. The gl50 is frustrating even for me in my thirties.

The flush switch is not the only problem. After you turn it on, the only control you have is a wheel, which can be clicked. So you navigate menus with up, down, and click. Not very obvious, but feasible. And since the wheel can easily be pressed by accident in your purse, that's presumably why you got the flush switch in the first place. The UI is pretty barebones, but it includes the settings for enabling Bluetooth (with a matching Android app, which I have not checked out for this review yet), and NFC (not sure what for). Worthy of note is that the UI defaults to German, without asking you, and you need to find your way to the settings in that language to switch to English, Italian, French, or Spanish.

Once you plug it into a computer with Windows, the device appears as a standard CD-Rom UMS device that includes an auto-started “portable” version of the download software, which is a very nice addition, again reminiscent of the Accu-Chek Mobile. It also comes with an installer for the onboard software. As a preview of the technical information post on this meter: it looks like, similarly to the OneTouch Verio, the readings are downloaded through UMS/SCSI packets.

I called out Windows above because I have not checked how this even presents on macOS, and on Linux… it doesn’t. It looks like I may have to take some time to debug the kernel, because what I get on Linux is infinite dmesg spam. I fear the UMS implementation on the meter is missing something, and Linux sends a command that the meter does not recognize.

The software itself is pretty bland, and there's not much to say about it. It does not appear to have a way to even set or get the device's time, which in my case is still stuck in 2015, because I haven't yet been bothered to roll the wheel all the way to today.

Overall, I wouldn't recommend this meter over any of the other meters I have or have used. If Beurer stays in the glucometer market (assuming they are making it themselves, rather than rebranding someone else's, like GlucoRx and Menarini appear to do), then it might be an interesting start of further competition in Europe, which I would actually appreciate.

Glucometer notes: GlucoRx Nexus

This is a bit of a strange post, because it would be a glucometer review, except that I bought this glucometer a year and a half ago, teased a review, and don't actually remember if I ever wrote any notes for it. While I may be able to get a new feel for the device to write a review, I don't even know if the meter is still being distributed, and a few of the things I'm going to write here suggest to me that it might not be, but who knows.

I found the Nexus as an over-the-counter boxed meter at my local pharmacy, in London. To me it appears like the device was explicitly designed to be used by the elderly, not just because of the large screen and numbers, but also because it comes with a fairly big lever to drop out the test strip, something I had previously only seen in the Sannuo meter.

This is also the first meter I've seen with an always-on display, although it seems that the backlight turns on only when the device is woken up, and it is otherwise pretty much unreadable. I guess they can afford this type of display given that the meter is powered by two AAA batteries, rather than a CR2032 like the others.

As you may have guessed by now from the top link about the teased review, this is the device that uses a Silicon Labs CP2110 HID-to-UART adapter, for which I ended up writing a pyserial driver earlier this year. The software to download the data seems to be available from the GlucoRx website for Windows and Mac; confusingly, the website you actually download the file from is not GlucoRx's but TaiDoc's. TaiDoc Technology Corporation is named on the label under the device, together with MedNet GmbH. A quick look around suggests TaiDoc is a Taiwanese company, and now I'm wondering if I'm missing some cultural significance around the test strips, or blood, and the push-out lever.

I want to spend a couple of notes on the Windows software, which is the main reason why I don't know if the device is still being distributed. The download I was provided today was for version 5.04.20181206 – which suggests the software was still being developed as of December last year – but it does not seem to have been properly tested to work on Windows 10.

The first problem is that the Windows Defender malware detection tool actually considers the installer itself as malware. I'm not sure why, and honestly I don't care: I'm only using this on a 90-day expiring Windows 10 virtual machine that barely has access to the network. The other problem is that when you try to run the setup script (yes, it's a script, it even opens a command prompt), it tries to install the redistributables for .NET 3.5 and Crystal Reports, fails, and errors out. If you try to run the setup for the software itself explicitly, you're told you need to install .NET 3.5, which is fair, but then it opens a link to Microsoft's website that no longer exists, giving you a 404. Oops.

Setting aside these two annoying, but not insurmountable problems, what remains is to figure out the protocol behind the scenes. I wrote a tool that reads a pcapng file and outputs the “chatter”, and you can find it in the usbmon-tools repository. It’s far from perfect and among other things it still does not dissect the actual CP2110 protocol — only the obvious packets that I know include data traffic to the device itself.

This is enough to figure out that the serial protocol is one of the “simplest” I have seen. Not in the sense of being easy to reverse, but rather in terms of the complexity of the messages: it's a ping-pong protocol with fixed-length 8-byte messages, of which the last byte is a simple checksum (an 8-bit sum), the first is a fixed start byte of 0x51, and the end is fixed except for a bit that selects host-to-device or device-to-host. Add to that the fact that the first nibble of the message always has the same value (2), and the amount of data actually carried by each message comes down to 34 bits. Which is a pretty low amount of information even for something as simple as glucose readings.
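
Just to make that structure concrete, here's a sketch of how such a message could be put together in Python. The exact field positions, the end-byte values, and which bytes the checksum covers are my guesses based on the description above, not a verified protocol definition:

START_BYTE = 0x51


def build_message(command: int, payload: bytes, host_to_device: bool) -> bytes:
    """Build one 8-byte message: start, command, 4 data bytes, end, checksum.

    The layout and the end-byte value are assumptions for illustration; only
    the overall shape matches the notes above.
    """
    if len(payload) != 4:
        raise ValueError("assuming four data bytes per message")
    command_byte = 0x20 | (command & 0x0F)  # first nibble is always 2
    end_byte = 0xA2 | (1 if host_to_device else 0)  # fixed end, one direction bit (value made up)
    body = bytes([START_BYTE, command_byte]) + payload + bytes([end_byte])
    return body + bytes([sum(body) & 0xFF])  # 8-bit sum checksum over the rest


def checksum_ok(message: bytes) -> bool:
    return (len(message) == 8 and message[0] == START_BYTE
            and sum(message[:7]) & 0xFF == message[7])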

At any rate, I think I already have a bit of the protocol figured out. I'll probably finish it over the next few days and the weekend, and then I'll post the protocol in the usual repository. Hopefully, if there are other users of this device, they can be better served by a tool to download the data that is not as painful to set up as the original software.

Fishy Facebook Ads: Earthly Citizens, Shutter & Contrast, and many more

(If you prefer this in the form of a Twitter thread, see this one.)

Let's start with the usual disclaimer that, despite me working for a company that sells advertising, this post is my own personal opinion, not my employer's. I have written about Internet ads for years, well before I joined the company, so this is nothing new. To the usual disclaimer I'm going to add a few words to point out that there will be a few company names used in this post; I'll be very clear when I think they are involved in something fishy, and when I think they are not involved at all.

This all starts with me deciding to get myself a new camera. While I'm very happy with the photos my usual camera produces, I wanted something lighter that I could go around town with more often. I have also been having issues with my shoulder, and I've been looking out for a good “handy” backpack to keep my stuff in. This is all relevant information.

Indeed, if you follow me on Twitter you may have seen me asking around for suggestions on backpacks. And this is also relevant: since I actually don't mind ads for content relevant to me, I have not hidden the fact that I was looking for a new bag; I spoke about it on social media, and I searched for backpacks and bags in my normal Google session. This is, again, all relevant information.

Because of my Google searches, I have been seeing a lot of ads related to photography, including the one for the chain of photography stores that convinced me to go and grab my new camera from them. Very few of those ads are useful to me, but that one in particular has been.

Then the other day, on Instagram, I saw an ad for a backpack from a company I had never heard of before, advertising as Earthly Citizens. I'm not going to link directly to their website, although I'm choosing to explicitly name them here so that people who may be looking for them on Google and other search engines have a landing page to help them. The backpack they advertised is this one (archived link) and it actually looks very nice in theory, on offer at £87.75 compared to an RRP of £159.61. For comparison, my trusty Think Tank Airport Essentials is £147.04, and that's one hell of a good bag.

The number of red flags in that advertisement was high: unknown brand, no branding on the actual bag, unrealistic “flash sale” with no dates on it, and so on. So I didn't really pay much attention. Then of course, since I had looked at the ad, I started seeing the same bag on Facebook, together with nearly 900 positive comments. I decided to do a minimal amount of digging, and found out that the website the ad points to is a standard Shopify instance, which means that digging into it with IP addresses or WhoIs information is useless. And since there's no address provided for the company, not even on the privacy pages, there's not much to go by. I walked away.

A day later, another set of ads started appearing in my Facebook stream, for a backpack that is stunningly similar, or rather identical. But this time from a different page, one with a more “photography” feel to it, called “Shutter & Contrast”. And that piqued my interest a little bit, because it sounded like another one of those cloned bags I have seen aplenty on Instagram, and at that point I actually wanted to find the source.

Just like Earthly Citizens, Shutter & Contrast don't seem to be very well reviewed. Searching the web for the name in combination with “reviews”, “backpack” and “scam” doesn't bring up anything useful. They also have a Shopify site, although their page for the same backpack (archived, again) is a bit more sombre and “professional-looking”.

Funnily enough, it looks like they have blocked copy-paste and right-click, so that you can't quickly reverse-image-search their photos. That didn't surprise me, as I remembered a BuzzFeed article on fake fashion stores outright stealing real designers' photos, so blocking the quickest reverse image search option would obviously be high on their list of priorities. Of course it's actually easy to work around this with any browser's developer tools.

Another interesting part of the Shutter & Contrast shop page is that they actually have an address in their Privacy Page: 11923 NE Sumner St, STE 813872, Portland, Oregon, 97220, USA. Again, I'm repeating it here for the sake of those looking for any information on this company, because if you look up the address, you'll probably find a Yelp page for a closed location called My Trail Gear, although it has a different “STE” number. The reviews call it a scam and point out that there are at least two more companies using the address, called “Bear and Tees” and “Shark and Tees”.

Checking the address on StreetView shows a smallish warehouse. My best guess is that there's a service at that address similar to Ireland's Parcel Motel and Parcel Wizard: companies that allow you to receive and send goods from that address, and then forward them somewhere else. The different “STE” numbers are used to route the parcels to the right customer. This means that despite the bad reviews on Yelp, Shutter & Contrast might be legit.

So I decided to take a closer look at the first one again. Earthly Citizens has a fairly active Facebook page, and if you read their About section, it says:

Our goal is to source all the best travel related documents from all around the world and bring them directly to your doorstep

Earthly Citizens Facebook Page

They don’t seem to be doing anything like that. Instead they seem to mostly re-post Instagram pictures by other people. At least it appears they are crediting the photographers — but it’s clear that they are using someone else’s pictures for their own marketing (so that they get people to follow their account). This should be worrisome enough, but it doesn’t stop there.

If you look at what they sell, they appear to be selling a lot of random stuff that you would find in those trinket/gadget shops in big malls, without brands, rhyme, or reason. So it does not look like they are the “source” of that bag to begin with. But is Shutter & Contrast, then?

Earthly Citizens say that there are “too many fake websites that steal content”. They would know since they seem to be one.

A very quick reverse image search finds that the same exact image appears on AliExpress (not archived, because they seem to defeat archiving), the Chinese shopping website. There are multiple sellers for it there as well, and most of them have the same images – the same images that both Earthly Citizens and Shutter & Contrast used on their websites.

It might very well be that these are the bag equivalent of Gongkai, as there are a few stores that sell them, and the fact that they come from Guangdong does not mean they are not good. I have a lovely tripod I bought at the Shanghai Xing Guang Photography Market: it's a Chinese brand, it's proper carbon fiber, and I paid half the price you would pay in a store in Europe, taxes included. If that is the case, though, the markups that Earthly Citizens and Shutter & Contrast are applying are thievery: they price it at $110 and $83 respectively, while AliExpress's most expensive seller has it at $52.

But there is one thing that I forgot about during my Twitter rant, and that my girlfriend pointed out: what about the pictures of people in the advertising? Neither AliExpress nor Earthly Citizens appear to have a picture of the backpack with a person. There are people with cameras, but nobody with the actual backpack that you can reverse-image-search for. There is a video on Earthly Citizens's Facebook page, the same one used in the Instagram ad, which suggests that the bag physically exists, but it's so heavily watermarked that it's hard to find the source. Shutter & Contrast has an unlisted video on YouTube, on a white background with no logos shown, just re-captioned to fit their marketing. It appears to have been uploaded in February 2019.

More usefully, Shutter & Contrast also appear to have a still picture of someone wearing what looks like the backpack they are selling, and that's the first time in this adventure I managed to find one. Reverse image search brings us to yet another Shopify instance under the name ConnectedTechPacks (archived), which can also be found as BestGearPack. Their website is a bit better made, and it appears to sell only that single backpack. Are they the source? I doubt it, since both websites were registered in April this year, and we know that the backpack existed in February. But they also have a couple of different people with the same backpack, and another angle of the same guy.

Another reverse image search later, I found yet another Shopify instance with the same backpack and a set of GIF animations that are also heavily watermarked, but are the same as Earthly Citizens's version.

So where did all this investigation bring us? Not really anywhere. I can't find any trustworthy brand selling the backpack, and while I may be willing to risk my £40 on the AliExpress version – rather than twice as much with any of the other Shopify instances that I found – I'm not holding my breath for it to look anything like they show it, or to have the build quality I would trust my cameras with.

It does show just how easy it is to fool people nowadays. It’s easy to set up a “storefront” without needing an actual space anymore. It’s easy to “gain trust” by having people follow your page with no original content, just by re-posting content that professionals provided.

What about the 900 positive comments that the ad received? Well, it's possible that they are actual, real, satisfied customers who didn't realize they got charged probably twice as much as they should have for the same bag you can get from AliExpress. Or they may be “bought engagement”. Or just a bunch of bots that have harvested someone else's name and pictures to create fake profiles to sell the stuff.

You know all the panic around politics and elections and fake profiles? It’s not just the elections. Fake profiles sell scams. And that can hurt people just as much as political elections. I remember when it was just the artists complaining about pages re-posting their content… we should have paid attention then. Now the same pages and the same techniques are used for more nefarious purposes and we all pay the price, sooner or later.

A FreeStyle Libre Update

The last time I wrote anything interesting about Abbott's flash glucose monitor (don't call it a CGM) was when I compared it with the underwhelming Dexcom G6. I thought it would be a good time to provide an update, what with Abbott having sent a number of emails in the past couple of weeks reminding you to update their FreeStyle LibreLink app.

First of all, there’s the matter of supplies. Back in January, I decided to test Dexcom’s CGM because Abbott’s supply issues bit me in the backside, as I could not get new sensors to keep up with my usage — particularly as the more active life in London with my girlfriend meant losing a couple more sensors to mistakes, such as bumping into the doorframe. For a while, you could only buy three sensors every 25 days, and even then, sometimes the lead time to fulfill the order would be over a week; nowadays this appears to be much better, and the time limit for the orders was removed recently.

Since I was not particularly thrilled to switch to the Dexcom G6, I had to find a way around these limits, besides counting on the two extra sensors I “gained” by not using the Libre for a month. As luck would have it, a friend of my girlfriend found the Libre sensors on sale in a brick-and-mortar store in Sharjah, and managed to send me six of them. The store had no limits on how many sensors you could buy, despite the FreeStyle UK website only allowing orders of three at most, and only to already-established customers.

The UAE-bought sensors are effectively the same as the British ones, with the same manufacturing information printed on them, and even similar enough lot numbers. The most visible difference is that the two alcohol-soaked tissues, provided for cleaning the insertion point, are missing.

The other difference is not visible in the packaging, or indeed on the hardware itself: the sensors are region-locked. Or maybe we should say that the app is. As it is, my (UK) FreeStyle LibreLink install did not want to set up the UAE-bought sensors. The reader device had no such concern and both initialised and read them just fine. I was originally a bit concerned and spot-checked the values with fingersticks, but it looked like there was no issue with the sensors at all.

I’ve been wondering just how much the supply problem connects with the region locking. Or just how fine-grained the region locking is: my Irish sensors worked perfectly fine with the UK app, although by that point, the app was not available in Ireland at all. But possibly all of these problems are gone.

Now, to go back to Abbott's email messages about updating their LibreLink app. The reason for this update is not so much the UI of the app itself – although that did change a bit, in subtle and annoying ways – but rather a change in their algorithm for turning the sensors' readings into a human-understandable blood glucose reading. The “curve”, as it's sometimes referred to. It's important to note that what the sensors communicate to either the app or the reader device are not “fully cooked” blood sugar readings, but rather a set of different sensor readings, and the app and reader then apply some formulas to provide a reading equivalent to a fingerstick.

Much more interesting to me, in the announcement of the new curve, is that they also suggest that users update the firmware of their reader devices to make use of the new fine-tuned algorithm. This is interesting because it makes the FreeStyle Libre the first glucometer with an upgradeable firmware. I have not actually run the update myself yet. It needs to be done just before changing the sensor, as the reader will forget about its last sensor at that point, and I'm a bit worried that it might not work with the UAE-bought sensors anymore after that. So I'm instead waiting to finish my supply of those sensors, and maybe I'll get another one later to test after the update.

I also want to try to get a usbmon trace of the whole firmware update procedure. I'm not sure when Abbott will publish another update for the reader, but at least starting to collect the protocol would be interesting. Once I do that, you can expect another blog post on the topic.

And as a final note, glucometerutils is being updated as I type this to support reading and setting patient names. While I would not suggest people use that field on their own personal glucometer, I thought it would be nice to provide the building block for more doctor-focused apps to be built out of it. As a reminder, the code is released under the MIT license, because using it to build something else is a primary focus of it: we need better tooling for glucometers, and not just in the Free Software world, but in the world in general!

Boot-to-Kodi, 2019 edition

This weekend I'm on call for work, so between me and my girlfriend we decided to take a few chores off our to-do lists. One of the things for me was to run the now episodic maintenance over the software and firmware of the devices we own at home. I call it episodic because I no longer spend every evening looking after servers, whether at home or remote, but rather look at them when I need to.

In this case, I honestly can't remember the last time I ran updates on the HTPC I use for Kodi and for the UniFi controller software. And that meant that after the full update I hit the now not-uncommon situation of Kodi refusing to start at boot. Or even when SSHing into the machine and starting the service by hand.

The error message, for ease of Googling, is:

[  2092.606] (EE) 
Fatal server error:
[  2092.606] (EE) xf86OpenConsole: Cannot open virtual console 7 (Permission denied)

What happens in this case is that the method I had been using to boot to Kodi was a systemd unit lifted from Arch Linux, which started a new session, X11, and Kodi all at once. This has now stopped working, because Xorg can no longer access the TTY: systemd does not think it should have access to the console.

There supposedly are ways to convince systemd that it should let the user run X11 without so much fluff, but after an hour trying a number of different combinations I was not getting anywhere. I finally found one way to do it, and that’s what I’m documenting here: use lightdm.

I have found a number of different blog posts out there that try to describe how to do this, but none of them appear to apply directly to Gentoo.

These are the packages that would be merged, in order: 
 
Calculating dependencies... done! 
[ebuild   R    ] x11-misc/lightdm-1.26.0-r1::gentoo  USE="introspection -audit -gnome -gtk -qt5 -vala" 0 KiB

You don’t need Gtk, Qt or GNOME support for lightdm to work. But if you install it this way (which I’m surprised is allowed, even by Gentoo) it will fail to start! To configure what you need, you would have to manually write this to /etc/lightdm/lightdm.conf:

[Seat:*] 
autologin-user=xbmc 
user-session=kodi 
session-wrapper=/etc/lightdm/Xsession

In this case, my user is called xbmc (this HTPC was set up well before the rename), and this effectively turns lightdm into a bridge from systemd to Kodi. The kodi session is installed by the media-tv/kodi package, so there’s no other configuration needed. It just… worked.

I know that some people would find the ability to do this kind of customization via “simple” text files empowering. For me it's just a huge waste of time, and I'm not sure why there isn't just an obvious way for systemd and Kodi to get along. I would hope somebody builds one in the future, but for now I guess I'll live with that.

I’m told Rust is great, where are the graphics libraries?

While I'm still a bit sour that Mozilla decided to use the same name for their language as an old project of mine (which is not a new thing for Mozilla anyway, if anyone remembers the days of Phoenix and Firebird), I have been watching from the sidelines as the Rust language positions itself as a way forward to replace so many applications of embedded C with a significantly safer alternative.

I have indeed been happy to see so much UEFI work happening in Rust, because it seems to me that we have come far enough that we can sacrifice some of the extreme performance of C for some safety.

But one thing that I still have not seen is a good selection of graphics libraries, and that is something that I’m fairly disappointed by. Indeed, I have been told that there are Rust bindings for the classic C graphics libraries — which is pointless, as then the part that needs safety (the parsing) is still performed in C!

The reason why I'm angry about this is that I still have one project, unpaper, which I inherited as a big chunk of C and which could definitely be rewritten in a safer language. But I would rather not do so in a higher-level language like Python, due to the already slow floating-point calculations and huge memory usage.

Right now, unpaper is using libav, or ffmpeg, or something with their interface, depending on how much they have fought this year. This is painful, but given that each graphics library implements its interfaces in different ways, I couldn't find a better and safe way to implement graphics processing. I was hoping that with all the focus on Rust out there, particularly from Mozilla, implementing graphics parsing libraries would be high on the list of priorities.

I think it’s librsvg that was ported to Rust — which was probably a great idea to prioritize, given it is exactly the type of format where C performs very poorly: string parsing. But I’m surprised nobody tried to make an API-compatible libpng or libtiff. It sounds to me like Rust is the perfect language for this type of work.

At any rate, if anyone finally decides to implement a generic graphic file input/output library, with at least support for TIFF, PNM and PNG, I’d love to know. And after that I would be happy to port unpaper to it — or if someone wants to take unpaper code as the basis to reimplement it as a proof of concept, that’d be awesome.

The problem for a lot of these libraries is that you have to maintain support for a long list of quirks and extensions that over time piled up on the formats. And while you can easily write tests to maintain bit-wise compatibility with the “original” C language based libraries for what concerns bitmap rendering (even for non-bitmap graphics such as JPEG and WebP), there are more things that are not obvious to implement, such as colour profiles, and metadata in general.

Actually, I think there is a lot of space here to build up a standard set of libraries for graphics formats and metadata, since there's at least some overlap between these, and having a bigger group of people working on separate but API-similar libraries for the various graphic formats would be a significant advantage for Rust over other languages.

Opinion: FinTech vs High Street

If you're a regular reader of this blog, you may have noticed that I have strong opinions regarding consumer financial services, particularly when it comes to Revolut, which I have written about a lot by now.

I didn’t start writing about these services because of a professional interest, but rather because when I moved from Italy to Dublin (via Los Angeles), I felt like I stepped back ten or more years with the banking system. And while this improved significantly when I moved to London, there are still a few things baffling me from time to time.

But as I discussed in one of my recent Revolut-bashing posts, compared to Ireland the high street banking options in London are so much more interesting that I’ve effectively ditched Revolut for day-to-day payments. So why would anyone care about FinTech products?

I have been thinking about this for a while, not just as a customer, but with an awareness that, if I decided to change my perspective in life and go for a riskier professional position than my rather cushy one, FinTech appears to be the place to be right now. Particularly given the unfortunate experience I have gained in this field by now.

One of the issues appears to be one of branding, and trust. Quite a few people appear to have a dislike for high street banks because of their association with previous scandals or news. And that’s what makes it funny to see how high street banks appear to just want to enter the market with new brands.

Another thing that Monzo appears to capitalize on, in their tube advertisements, is the ability to receive instant notifications of money spent. And that's something I can definitely relate to. This is particularly important when you get to shadier stores, or to coffee shops with untrained staff, that may claim a transaction didn't really go through and suggest you pay cash instead, charging you twice.

Indeed, this was one of the biggest advantages of using Revolut for me in Ireland. The “famous” Tesco Bank credit card didn't even really have an online banking platform, and the only way for me to confirm whether a transaction went through was by looking at my Tesco points statements. But this is not something revolutionary: I had notifications of all online transactions, and of card-present transactions over €50, on my Italian pre-paid card in 2006 (via SMS, not via app, of course, at the time).

While I feel Monzo is right to take a swing at most high street banks for not implementing these notifications, even in 2019 London it's not true that you need to “go FinTech” to have this level of support. My American Express does the same, and you cannot say that AmEx is a new player on the market!

And it doesn’t stop at just sending me notifications for the charges: American Express goes one step further, and integrates with Google Pay so that you get the notifications even without having the American Express application installed.

Indeed, I have a feeling that, for the most part, customers would be happy if the level of support in high street banking was on par with American Express:

  • Their website lets you log in with a simple username/password combination, rather than the silly security theatre of “Give me the 1st, 2nd, 12th character of your password, and the 1st, 5th and 6th digits of your PIN” (seriously, setting aside the random index selection, why on Earth do you need two equivalent factors?)
  • New charges on the card are notified immediately, either through app or through Google Pay (I don’t know about Apple Pay but I assume that’s the case there as well).
  • You can get your card’s PIN online, which is usually verified by a text message OTP.

One thing that AmEx does not do, but that all of the FinTech players appear to do, is freezing/unfreezing the card on the fly. A feature that Barclays has been advertising all over as if they had invented it.

It is quite possible, even certain, that some UK high street banks have already started providing all of these options, maybe in different combinations. As I said, Barclays does appear to have the ability to freeze/unfreeze the card. Fineco does not mail out the PIN, but rather has you request it online and delivers it by text message. And as I pointed out before, Santander has a credit card with no foreign transaction fees.

Many of the articles I have read on the importance of FinTech startups imply that the main reason why big banks can't be this flexible or “innovative” is that they have old, heavy and difficult-to-manage backends. From second-hand discussions, I can believe that the backends are indeed as heavy and clunky as they are purported to be, but it does seem to me that many of the features involved can't be that tied to the backends, given that most of the banks can provide those features already.

One feature that I see being deployed across different banks is the ability to “budget” expenses. While it sounds particularly interesting, it appears to be mostly a “frontend” feature. Santander has it, but somehow they decided to implement it in a separate Android app only, which I gave up on. Indeed, it does not allow you to correct their classification of expenses, which makes it pretty much useless, not just because some vendors are classified completely wrong, but also because sometimes the same vendor might be used for different reasons (Boots, CVS, Walgreens, and similar all sell both medicines and groceries; how you categorize that spend depends on what you bought!)

While Santander have already won me over as a bank customer, I do feel that they would win over more of my credit card expenses from American Express if they implemented “this one weird trick” of informing me of charges as they happen. Because small things like that are one of the reasons I use my AmEx quite a lot in the UK, even after I reach the needed spend to upgrade my Marriott membership to gold.

So yeah, my hope is that high street banks will finally see the competition from FinTech as a list of features that they should, opportunistically, implement, rather than an excuse for the branding and marketing departments to come up with new ideas to be “hip”.

“Planets” in the World of Cloud

As I have written recently, I'm trying to reduce the number of servers I directly manage, as it's getting annoying and, honestly, out of touch with what my peers are doing right now. I have already hired another company to run the blog for me, although I do keep access to all its information at hand and can migrate where needed. I have also given Firebase Hosting a try for my tiny photography page, to see if it would be feasible to replace my homepage with that.

But one of the things that I still definitely need a server for is to keep running Planet Multimedia, despite its tiny userbase and dwindling content (if you work in FLOSS multimedia, and you want to be added to the Planet, drop me an email!)

Right now, the Planet is maintained through rawdog, which is a Python script that works locally with no database. This is great to run on a vserver, but in a world where most of the investments and improvements go into Cloud services, it's not really viable as an option. And to be honest, the fact that this is still using Python 2 worries me more than a little, particularly when the author insists that Python 3 is a different language (it isn't).

So, I’m now in the market to replace the Planet Multimedia backend with something that is “Cloud native” — that is, designed to be run on some cloud, and possibly lightweight. I don’t really want to start dealing with Kubernetes, running my own PostgreSQL instances, or setting up Apache. I really would like something that looks more like the redirector I blogged about before, or like the stuff I deal with for a living at work. Because it is 2019.

So, sketching this “on paper” very roughly, I expect such software to be along the lines of a single binary with a configuration file, which outputs static files that are then served by the web server. Kind of like rawdog, but long-running. Changing the configuration would require restarting the binary, but that's acceptable. No database access is really needed, as caching can be kept at the process level – although that would mean that permanent redirects couldn't be rewritten in the configuration. So maybe some configuration database would help, but it seems most clouds support some simple unstructured data storage that would solve that particular problem.

From experience at work, I would expect the long-running binary to itself be a webapp, so that you can either inspect (read-only) what's going on, or make changes to the configuration database with it. And it should probably have independent, parallel fetchers for the various feeds, which store the received content into a shared (in-memory only) structure that the generation routine then uses to produce the output files. It may sound like over-engineering the problem, but that's a bit of a given for me nowadays.
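
To make the shape of it a bit more concrete, here's a very rough Python sketch of that fetch-then-generate loop. Everything in it is hypothetical (the feed list, the output file, the refresh interval), and a real implementation would parse the feeds and render templates rather than just counting bytes:

#!/usr/bin/env python3
"""A very rough sketch of the fetch-then-generate loop described above.

Everything here is hypothetical: the feed list, the output file, and the
fact that entries are dumped as byte counts rather than parsed and rendered.
"""

import threading
import time
from concurrent.futures import ThreadPoolExecutor
from typing import Dict
from urllib.request import urlopen

FEEDS = {  # would come from the configuration (database) in the real thing
    "example": "https://blog.example.org/feed.xml",
}

_lock = threading.Lock()
_cache: Dict[str, bytes] = {}  # the shared, in-memory-only structure


def fetch(name: str, url: str) -> None:
    try:
        with urlopen(url, timeout=30) as response:
            body = response.read()
    except OSError:
        return  # keep whatever we had from the previous run
    with _lock:
        _cache[name] = body


def generate() -> None:
    # The real generator would parse the feeds and render templates; this
    # only shows the shared-structure-to-static-output shape of it.
    with _lock, open("planet.html", "w") as output:
        output.write("<ul>\n")
        for name, body in sorted(_cache.items()):
            output.write(f"<li>{name}: {len(body)} bytes fetched</li>\n")
        output.write("</ul>\n")


def main() -> None:
    while True:
        with ThreadPoolExecutor(max_workers=4) as pool:
            for name, url in FEEDS.items():
                pool.submit(fetch, name, url)
        generate()
        time.sleep(1800)  # refresh every half hour


if __name__ == "__main__":
    main()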

To be fair, the part that makes me most uneasy is authentication, but Identity-Aware Proxy might be a good solution for this. I have not looked into it, but I have used something similar at work.

I’m explicitly ignoring the serving-side problem: serving static files is a problem that has mostly been solved, and I think all cloud providers have some service that allows you to do that.

I'm not sure if I will be able to work more on this, rather than just providing a sketched-out idea. If anyone knows of something like this already, or feels like giving building it a try, I'd be happy to help (employer permitting, of course). Otherwise, if I find some time to build something like this, I'll try to get it released as open source, to build upon.

Introducing usbmon-tools

A couple of weeks ago I wrote some notes about my work in progress on code to handle usbmon captures, and pre-announced that I was going to publish more of my extraction/inspection scripts.

The good news is that the project is now released, and you can find it on GitHub as usbmon-tools with an Apache 2.0 license, and open to contributions (with a CLA, sorry about that part). This is the first open source project I release using my employer’s releasing process (for other projects, I used the IARC process instead), and I have to say I’m fairly pleased with the results.

This blog post is meant mostly as a way to explain what's going on in my head regarding this project, with the hope that contributors can help make it reality. Or that they can contribute other ideas to it, even when they are not part of my particular plans.

I want to start with a consideration on the choice of language. usbmon-tools is written in Python 3. And in particular it is restricted to Python 3.7, because I wanted to have access to type annotations, which I found extremely addictive at work. I even set up Travis CI to run mypy as part of the integration tests for the repository.

For other projects I tend to be more conservative, and wait for Debian stable to have a certain version before requiring it as a minimum, but as this is primarily a toolset for developers, I expect its audience to be able to deal with Python 3.7 as a requirement. That version was released nearly a year ago, which should be plenty of time for people to have it at hand.

As for what the project should achieve, in my view it's an easy way for developers to dissect a USB snooping trace. I started by building a simplistic tool that recreates a text-format trace from the pcapng file, based on the official documentation of usbmon in the kernel (I have some patches to improve on that, too, but that will probably become a post by itself next week). It's missing isochronous support, and it's not fully tested, but it at least gave me a few important insights into the format itself, including the big caveat that the “id” (or tag) of the URBs is not unique.

Indeed, I think that alone is one of the most important pieces of the puzzle in the library: in addition to parsing the pcapng file itself, the library can re-tag the events so that they get a real unique identifier (UUID), making it significantly easier to analyze the traces.
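
The core of that idea fits in a few lines. This is a simplified sketch, not the library's actual data model: events are reduced to a (type, tag) pair, where the type follows usbmon's submission/callback/error convention:

import uuid
from typing import Dict, Iterable, Iterator, Tuple

# A simplified event: usbmon's type character ('S' submission, 'C' callback,
# 'E' error) and the kernel-assigned tag, which gets reused between URBs.
Event = Tuple[str, int]


def retag(events: Iterable[Event]) -> Iterator[Tuple[uuid.UUID, str, int]]:
    """Pair each submission with its completion and give the pair a UUID."""
    in_flight: Dict[int, uuid.UUID] = {}
    for event_type, tag in events:
        if event_type == "S":
            in_flight[tag] = uuid.uuid4()  # a new URB lifetime starts here
        # Events whose submission we never saw still get a fresh identifier.
        yield in_flight.get(tag) or uuid.uuid4(), event_type, tag
        if event_type in ("C", "E"):
            in_flight.pop(tag, None)  # the lifetime ends at completion/error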

My next steps on the project are to write a more generic tool to convert a USB capture into what I call my “chatter format” (similar to the one I used to discuss serial protocols), and a more specific one that converts HID traces (because HID is a better-defined protocol, and we can go a level deeper in exposing it in a human-readable form). I'm also considering whether it would be within reach to provide the tool with a HID descriptor blob, parse it, and use it to interpret the HID traffic. It would make some debugging particularly easier, for instance the work I did when I was fixing the ELECOM DEFT trackball.

I would also love to be able to play with a trace in a more interactive manner, for instance by loading it into a Jupyter notebook, so that I could try parsing the blobs interactively, but unless someone with more experience with those contributes the code, I don't expect I'll have much time for it.

Pull requests are more than welcome!