Was Acronis True Image 2020 a mistake?

You may remember that a few months ago I complained about Acronis True Image 2020. I have since been mostly happy with the software, despite it still being fairly slow when uploading a sizable amount of changed files, such as after shooting a bunch of pictures at home. This would have been significantly more noticeable if we had actually left the country since I started using it, as I usually shoot at least 32GB of new pictures on a trip (and sometimes twice as much), but with lockdown and all, it didn’t really happen.

But, besides that, the software worked well enough. Backups happened regularly, both to the external drive and to the cloud, and I generally felt safe using it. Until a couple of weeks ago, when it suddenly stopped working, failing with Connection Timeout errors. The failures didn’t correlate with anything: I did upgrade to Windows 10 20H1, but that was a couple of weeks earlier, and backups went through fine until then. There was no change in my network, no change from my ISP, and so on.

So what gives? None of the tools available from Acronis reported errors, ports were not marked as blocked, and I was running the latest version of everything. I filed a ticket, and was called on the phone by one of their support people, who actually seemed to know what he was doing — TeamViewer at hand, he checked once again for connectivity, and once again found that everything was alright. The only thing he found to change was disabling the True Image Mounter service, which is used to get quick access to the image files, and thus is not involved in the backup process. I had to disable that one because, years after Microsoft introduced WSL, enabling it breaks WSL filesystem access altogether, so you can’t actually install any Linux distro, change passwords in the ones you already installed, or run apt update on Debian.

This was a week ago. In the meantime, support asked me to scan the disks for errors, because their system report flagged one of the partitions as having issues (if I read their log correctly, that’s one of the recovery images, so it’s not at all related to the backup), and more recently to give them a Process Monitor log while running the backup. Since they don’t actually give you a list of processes to limit the capture to, I ended up having to kill most of the other running applications to take the log, as I didn’t want to leak more information than I was required to. It still provided a lot of information I’m not totally comfortable with having provided. And I still have no answer, at the time of writing.

That’s not all — the way you provide all these details to them is fairly clunky: you can’t just mail them, or attach them through their web support interface, as even their (compressed) system report is more than 25MB for my system. Instead, what they instruct you to do is to take the compressed files and upload them over FTP with a username/password pair they provide to you.

Let me repeat that. You upload compressed files, that include at the very least most of the filenames you’re backing up, and possibly even more details of your computer, with FTP. Unencrypted. Not SFTP, not FTPS, not HTTPS. FTP. In 2020.

This is probably the part that makes my blood boil. Acronis has clearly figured out that the easiest way for people to get support is to use something they can use very quickly. Indeed, you can still put an FTP URL in the location bar of Windows 10’s File Explorer, and it will let you upload and download files over it. But it does so in a totally unencrypted, plain-text manner. I wonder how much more complicated it would be to use at least FTPS, or to have an inbound-only, password-protected file upload system, like Google Drive or Dropbox; after all, they are a cloud storage solution provider!
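To illustrate how small the gap is: Python’s standard library ships explicit-TLS FTP support right next to the plain-text client, so a sketch of an encrypted upload (hostname, credentials and path here are made-up placeholders, not anything Acronis provides) is barely longer than the insecure version:

```python
import ftplib

def upload_report(host: str, user: str, password: str, path: str) -> None:
    """Upload a support bundle over FTPS (FTP with explicit TLS).

    FTP_TLS is a drop-in replacement for ftplib.FTP: same login and
    transfer calls, but the control and data channels are encrypted.
    """
    with ftplib.FTP_TLS(host) as ftp:
        ftp.login(user, password)   # credentials now travel encrypted
        ftp.prot_p()                # switch the data channel to TLS too
        with open(path, "rb") as f:
            ftp.storbinary(f"STOR {path}", f)
```

Server-side, most FTP daemons have supported explicit TLS for well over a decade, so the client is rarely the blocker.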

As for myself, I found a temporary workaround while waiting for the support folks to figure out what they likely screwed up in their London datacenter: I’m backing up my Lightroom pictures to the datacenter they provide in Germany. It took three days to complete, but it at least gives me peace of mind that, if something goes horribly wrong, the most dear part of my backup is saved somewhere else.

And honestly, using a different backup policy for the photos than for the rest of the system is probably a good idea: I set it to “continuous backup”, because the photo library generally stays the same for long stretches, until I go and prepare another set to publish; then a lot of things change quickly, and then nothing until the next time I get to it.

Also, I do have the local backup — that part is still working perfectly fine. I might actually want to use it soon, as I’m of two minds between cloning my main OS drive from its 1TB SSD to a 2TB SSD, and just getting the 2TB SSD and installing everything anew onto it. If I do go that route, I will also reuse the 1TB SSD in my NUC, which right now is running with half SATA and half NVMe storage.

Conclusions? Well, compared to Amazon Glacier + FastGlacier (which has not been updated in just over two years now, and still sports a Google+ logo and +1 button!), it’s still good value for money. I’m spending a fraction of what I used to spend with Amazon, and even in its half-broken state it’s backing up more data with significantly faster access. The fact that you can set different policies for different parts of the backup is also a significant plus. I just wish there was a way to go from a “From Folders to Cloud” backup to a tiered “From Folders to External, plus Cloud” — or maybe I’ll bite the bullet and, if it’s really this broken, also reconfigure the Lightroom backup to use the tiered option.

But Acronis, consider cleaning up your support act. It’s 2020; you can’t expect your customers to send you all their information via unencrypted protocols, for safety’s sake!

Update 2020-06-30: the case is now being escalated to the “development and cloud department” — and if this is at all in the same ballpark as the companies I worked for it means that something is totally messed up in their datacenter connectivity and I’m the first one to notice enough to report to them. We’ll see.

Update 2020-07-16: well, the problem is “solved”. In the sense that, after I asked them, they moved my data out of the UK (London) datacenter into the Germany one, which works fine and has no issues. They also said they would extend my subscription by the month during which the backup wasn’t working. But yeah, it turns out that nobody on their side seems to have a clear idea of what was going on; the UK datacenter just disappeared off my dashboard. I wonder how many others had this problem.

The GPL is not an EULA

Before I get to the meat of this blog post, let me make sure nobody mistakes me for a lawyer. What you’re about to read is a half-rant about Free Software projects distributing Windows binaries with pretty much default installer settings. It is in no way legal advice, and it is not being provided by someone with any semblance of legal training.

If you follow Foone or me on Twitter, you have probably noticed at one time or another our exchanges of tweets about the GPL in Windows installers.

The reason for this annoyance is that, as Foone pointed out many times in the past, licenses such as GPL and MIT are not EULAs: End-User License Agreements. They are, by design, licensing the distribution of the software, rather than its use. Now, in 2020 a lot of people are questioning this choice, but that’s a different topic altogether.

What this means for a consumer is that you are not required to agree to the GPL (or LGPL, or MIT) to install and use a piece of software. You’re required to agree to it if you decide to redistribute the software. And as such, the install wizards’ license dialogs, with their “I accept the terms” checkboxes, are pretty much pointless. And an annoyance, because you need to actually figure out where to click instead of just clicking “Next” — yes, I realise that the tiniest violin may be playing at that annoyance, but that’s not the point.

Indeed, the reason I make fun of these installers is that, at least to me, they show the cultural mark of proprietary software on Windows, and the general lack of interest in the politics of Free Software from pretty much everybody involved. The reason the installers default to saying “EULA” and insisting that you agree to it is that non-Free Software on Windows usually does have EULAs. And even the FLOSS installer frameworks ended up implementing the same pattern.

VLC, unsurprisingly, cares about Free Software ideals and its politics, and went out of its way many years ago to make sure that the license is correctly shown in its installer. For a few other projects, I sent patches myself to correct them, whenever I can. For others… well, it’s complicated. The WiX installer toolkit was released years ago by Microsoft as open source, and is used by Calibre and Mono among others, but it seems like the only way to have it show a non-EULA screen is to copy one of its built-in dialogs and edit it.
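For NSIS-based installers (VLC’s among them), the Modern UI already provides the knobs needed to present the license as information rather than an agreement. A sketch, using the documented MUI license-page defines; the wording of the texts is mine, not any project’s actual installer script:

```nsis
; Show COPYING as information, not as a contract the user must accept
!define MUI_LICENSEPAGE_TEXT_TOP "This software is Free Software, licensed under the GPL."
!define MUI_LICENSEPAGE_TEXT_BOTTOM "You do not need to accept this license to use the software. It applies if you redistribute it. Click Next to continue."
!define MUI_LICENSEPAGE_BUTTON "Next >"
!insertmacro MUI_PAGE_LICENSE "COPYING"
```

The key detail is replacing the “I Agree” button label and dropping any accept checkbox or radio defines, so the page is purely informative.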

As I said recently on Twitter, we need a reference website, with instructions on how to correctly display non-EULA Free Software licenses on Windows (and any other operating system). Unfortunately I don’t have time to go through the releasing process as I’m about to leave the company in a few weeks. So either it’ll have to wait another couple of weeks (when I’m free from those obligations), or be started by someone else.

Until then, I guess I’ll provide this blog post as a reference for anyone who asks me why I even complain about those licenses.

Windows Backup Solutions: trying out Acronis True Image Backup 2020

One of my computers is my gamestation which, to be honest, has not run a game in many months now. It runs Windows out of necessity, but also because honestly sometimes I just need something that works out of the box. The main usage of that computer nowadays is Lightroom and Photoshop for my photography hobby.

Because of the photography usage, backups are a huge concern to me (particularly after movers stole my previous gamestation), and so I have been using a Windows tool called FastGlacier to store a copy of most of the important stuff on Amazon’s Glacier service, in addition to letting Windows 10 do its FileHistory magic on an external hard drive. Not a cheap option, but (I thought) a safe and stable one. Unfortunately, the software appears to no longer be developed, and with one of the more recent Windows 10 updates it stopped working (and since I had set it up as a scheduled operation, it failed silently, which is the worst thing that can happen!)

My original plan for last week (at the time of writing) was to work on pictures, as I have shots from a trip over three years ago that I have still not wandered through, rather than working on reverse engineering. But when I noticed the lacking backups, I decided to put that on hold until the backup problem was solved. The first problem was finding a backup solution that would actually work, and that wouldn’t cost an arm and a leg. The second problem was that most of the people I know are tinkerers who like Rube Goldberg solutions, such as running rclone on Windows with the task scheduler (no thanks, that’s how I failed the Glacier backups).

I didn’t have particularly high requirements: I wanted a backup solution that would do both local and cloud backups, because Microsoft has been reducing the featureset of their FileHistory solution, and relying on it alone feels a bit flaky. I also wanted the ability to store more than a couple of terabytes in the cloud (I have over 1TB of RAW shots!), even at a premium. I was not too picky on price, as I know features and storage are expensive. And I wanted something that would just work out of the box. A few review reads later, I found myself trying Acronis True Image Backup. A week later, I regret it.

I guess the best lesson I learnt from this is that Daniel is right, and it’s not just about VPNs: most review sites seem to score higher the software they get more money from via affiliate links (you’ll notice that in this blog post there won’t be any!). So while a number of sites had great words for Acronis’s software, I found it sufficiently lacking that I’m ranting about it here.

So what’s going on with the Acronis software? First of all, while it does support both “full image” and “selected folders” modes, you need to be aware that the backup is not usable as-is: you need the software to recover the data. Which is why it comes with bootable media, “survival kits”, and similar amenities. This is not a huge deal to me, but it’s still a bit annoying, when FileHistory used to allow direct access to the files. It also locks you into accessing the backup with their software, although Acronis makes the restore option available even after you let your subscription expire, which is at least honest.

Then the next thing that was clear to me was that the speed of the cloud backup is not Acronis’s strongest suit. The original estimate for the 2.2TB of data I expected to back up was on the mark at nearly six days. To be fair to Acronis, the process went extremely smoothly: it never got caught up, looped, crashed, or slowed down. The estimate was very accurate, and indeed running this for about 144 hours was enough to have the full data backed up. Their backup status also shows the average speed of the process, which matched the 50Mbps I had estimated while the backup was running.

The speed is the first focus of my regret. 50Mbps is not terribly slow, and for most people this might be enough to saturate their Internet uplink. But not for me. At home, my line is provided by Hyperoptic, with a 1Gbps line that can sustain at least 900Mbps upload. So seeing the backup bottlenecked by this was more than a bit annoying. And as far as I can tell, there’s no documentation of this limit on the Acronis website at the time of writing.

When I complained on Twitter about this, it was mostly in frustration for having to wait, and I still considered the 50Mbps speed at least reasonable (although I would have considered paying a premium for faster uploads!), but the replies I got from support got me more upset than before. Their Twitter support people insisted that the problem was with my ISP, and sent me to their knowledgebase article on using the “Acronis Cloud Connection Verification Tool” — except that following the instructions showed I was supposed to be using their “EU4” datacenter, for which there is no tool. I was then advised to file a ticket about it. Since then, I appear to have moved back to “EU3” — maybe EU4 was not ready yet.

The reply to the ticket was even more of an absurdist mess. Besides a lot of words to explain “speed is not our fault, your ISP may be limiting your upload” (fair, but I had already noted to them that I knew that was not the case), one of the steps they request you to follow is to go to one of their speedtest apps — which returns a 504 error from nginx, oops! Oh, and you need to upload the logs via FTP. In 2020. Maybe I should call up Foone to help. (Windows 10, as it happens, still supports FTP write access via File Explorer, but it’s not very discoverable.)

Both support people also kept reminding me that the backup is incremental, so after the first cloud backup, everything else should be a relatively small amount of data to copy. Except I’m not sold on that either: 128GB of data (the amount of pictures I came back from Budapest with) would still take nearly six hours to back up.
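That six-hour figure is straightforward to sanity-check with back-of-the-envelope math (a quick sketch, using decimal gigabytes and megabits):

```python
def transfer_hours(gigabytes: float, megabit_per_s: float) -> float:
    """Hours needed to move `gigabytes` of data at `megabit_per_s`."""
    # GB -> megabits (x8000), divide by link speed, convert seconds to hours
    return gigabytes * 8000 / megabit_per_s / 3600

# 128 GB of new pictures over the ~50 Mbps the backup sustains
print(round(transfer_hours(128, 50), 1))  # → 5.7
```

So at a full gigabit upload the same batch would take well under an hour; the 50Mbps ceiling is what turns a routine incremental into an overnight job.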

When I finally managed to get a reply that was not straight from a support script, they told me to run the speedtest against a different datacenter, EU2. As it turns out, this is their “Germany” datacenter. This was made clear by tracerouting the IP addresses of the two hosts: EU3 is connected directly to LINX, while EU2 goes back to AMS, then FRA (Frankfurt). The speedtest came out fairly reasonable (around 250Mbps download, 220Mbps upload), so I shared the data they requested in the ticket… and then wondered.

Since you can’t change the datacenter you back up to once you have started a backup, I tried something different: I used their “Archive” feature, and tried to archive a multi-gigabyte file, but to their Germany datacenter rather than the United Kingdom one (against their recommendation to «select the country that is nearest to your current location»). Instead of a 50Mbps peak, I got a 90Mbps peak, with a sustained 67Mbps. This is still not particularly impressive, but it would have cut the six days down to three, and the five hours to around two. And it clearly sounds like their EU3 datacenter is… not good.

Anyway, let’s move on and look at local backups, which Acronis is supposed to take care of by itself. For this one, at first I wanted to use the full-image backup, rather than selecting folders like I did for the cloud copy, since it would be much cheaper, and I have a 9TB external hard drive anyway… and when you do that, Acronis also suggests creating what they call the “Acronis Survival Kit” — which basically means making the external hard drive bootable, so that you can start up and restore the image straight from it.

The first time I tried setting it up that way, it formatted the drive, but didn’t even manage to get Windows to mount the new filesystem. I got an error message linking to a knowledgebase article that… did not exist. This was more than a bit annoying, but I decided to run a full SMART check on the drive to be safe (no errors to be found), and then tried again after a reboot. That time it finally seemed to work, but here’s where things got even more hairy.

You see, I wanted to use my 9TB external drive for the backup. A full image of my system was estimated at 2.6TB. But after the Acronis Survival Kit got created, the amount of space available for the backup on that disk was… 2TB. Why? It turned out that the Kit creation caused the disk to be repartitioned as MBR, rather than the more modern GPT. And with MBR you can’t have a (boot) partition bigger than 2TB. Which means that the creation of the Survival Kit silently decreased my available space to nearly a fifth!
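The 2TB figure falls straight out of the MBR partition table format, which stores a partition’s sector count in a 32-bit field:

```python
SECTOR_BYTES = 512   # logical sector size the partition table assumes
LBA_BITS = 32        # MBR records partition size as a 32-bit sector count

mbr_max_bytes = (2 ** LBA_BITS) * SECTOR_BYTES
print(mbr_max_bytes // 2 ** 40)  # → 2  (TiB)
```

GPT uses 64-bit sector counts instead, which is why it has no such practical ceiling.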

The reply from Acronis on Twitter? According to them, my Windows 10 was started in “BIOS mode”. Except it wasn’t: it’s set up with UEFI and Secure Boot. And unfortunately, there doesn’t seem to be an easy way to figure out why the Acronis software thinks otherwise. Worse, the knowledgebase article says I should have gotten a warning, which I never did.

So what is it going to be at the end of the day? I tested the restore from Acronis Cloud, and it works fine. Acronis has been in business for many years, so I don’t expect them to disappear next year, and the likelihood of me losing access to these backups is fairly low. I think I may just stick with them for the time being, and hope that someone in the Acronis engineering or product management teams reads this feedback, thinks about that speed issue, and maybe starts considering asking support people to refrain from engaging with other engineers on Twitter with fairly ridiculous scripts.

But to paraphrase a recent video by Techmoan, these are the type of imperfections (particularly the mis-detected “BIOS booting” and the phantom warning) that I could excuse in a £50 software package, but that are much harder to excuse in a £150/yr subscription!

Any suggestions for good alternatives to this would be welcome, particularly before next year, when I might reconsider if this was good enough for me, or a new service is needed. Suggestions that involve scripts, NAS, rclone, task scheduling, self-hosted software will be marked as spam.

Amazon, Project Gutenberg, and Italian Literature

This post starts in the strangest of places. The other night, my mother was complaining how few free Italian books are available on the Kindle Store.

Turns out, a friend of the family, who also has a Kindle, has been enjoying reading older free English books on hers. As my mother does not speak or read English, she was complaining that the same is not possible in Italian.

The books she’s referring to are older books, the copyright of which has expired, and which are available on Project Gutenberg. Indeed, the selection of Italian books on that site is fairly limited, and it is something that I have been saddened by before.

What does Project Gutenberg have to do with Kindle? Well, Amazon appears to collect books from Project Gutenberg, convert them to Kindle’s native format, and “sell” them on the Kindle Store. I say “sell” because for the most part these are listed at $0.00, and are thus available for free.

While there is no reference to Project Gutenberg on their store pages, there’s usually a note on the book:

This book was converted from its physical edition to the digital format by a community of volunteers. You may find it for free on the web. Purchase of the Kindle edition includes wireless delivery.

Another important point is that (again, for the most part), the original language editions are also available! This is how I started reading Jules Verne’s Le Tour du monde en quatre-vingts jours while trying to brush up my French to workable levels.

Having these works available on the Kindle Store, free of both direct cost and delivery charge, is in my opinion a great step to distribute knowledge and culture. As my nephews (blood-related and otherwise) start reaching reading age, I’m sure that what I will give them as presents is going to be Kindle readers, because between having access to this wide range of free books, and the embedded touch-on dictionary, they feel like something I’d have thoroughly enjoyed using when I was a kid myself.

Unfortunately, this is not all roses. The Kindle Store still georestricts some books, so from my Kindle Store (which is set to the US), I cannot download Ludovico Ariosto’s Orlando Furioso in Italian (though I can download the translation for free, or buy for $0.99 a non-Project Gutenberg version of the original Italian text). And of course there is the problem of coverage for the various languages.

Italian, as I said, appears to be a pretty bad one when it comes to coverage. If I look at Luigi Pirandello’s books, there are only seven entries, one of which is in English, and another of which is a duplicate. Compare this with the actual list of his works and you can see that it’s very lacking. And since Pirandello died in 1936, his works are already in the public domain.

Since I have not actually been active with Project Gutenberg, I only have second-hand knowledge of why this type of problem happens. One of the things I remember having been told is that most of the books you buy in Italian stores are either annotated editions, or updated for modern Italian, which causes their copyright to be extended to the death of the editor, annotator or translator.

This lack of access to Italian literature is a big bother, and quite a bit of a showstopper to giving a Kindle to my Italian “nephews”. I really wish I could find a way to fix the problem, whether it is by technical or political means.

On the political side, one could expect that, with the focus on culture of the previous Italian government, and the focus of the current government on free-as-in-beer options, it would be easy to convince them to release for free all of the Italian literature that is in the public domain. Unfortunately, I wouldn’t even know where to start asking them to do that.

On the technical side, maybe it is well due time that I spend a significant amount of time on my now seven-year-old project of extracting a copy of the data from the data files of Zanichelli’s Italian literature software (likely developed at least in part with public funds).

The software was developed for Windows 3.1 and can’t be run on any modern computer. I should probably send the ISOs to the Internet Archive; they may be able to keep it running there on DOSBox with a real copy of Windows 3.1, since Wine appears not to support the 16-bit OLE interfaces that the software depends on.

If you wonder what would be a neat thing for Microsoft to release as open source, I would probably suggest the whole Windows 3.1 source code as a starting point. If nothing else, with the right license it would be possible to replace the half-complete 16-bit DLLs of Wine with official, or nearly official, copies.

I guess it’s time to learn more about Windows 3.1 in my “copious spare time” (h/t Charles Stross), and start digging into this. Maybe Ryan’s 2ine might help, as OS/2 and Windows 3.1 are closer than the latter is to modern Windows.

New, new gamestation

Full disclosure: the following post has a significantly higher amount of Amazon Affiliates links than usual. That’s because I am talking about the hardware I just bought, and this post counts just as much as an update on my hardware as a recommendation of the stuff I bought. I have not gotten hacked or been bought out by anyone.

As I noted in my previous quick update, my gamestation went missing in the move. I would even go as far as to say that it was stolen, but I have no way to prove whether it was stolen by the movers or during the move. This meant that I needed to get a new computer for my photo editing hobby, which meant more money spent, and still no news from the insurance. But oh well.

As I did two years ago, I want to post here the current hardware setup I have. You’ll notice a number of similarities with the previous configuration, because I decided to stick as much as possible to what I had before, which worked.

  • CPU: Intel Core i7 7820X, which still has a nice 3.6GHz base clock, and has more cores than I had before.
  • Motherboard: MSI X299 SLI PLUS. You may remember that I had problems with the ASUS motherboard.
  • Memory: 8×Crucial 16GB DDR4.
  • Case: Fractal Design Define S, as I really like the designs of Fractal Design (pun not intended), and I do not need the full cage or the optical disk bays for sure this time around.
  • CPU cooler: NZXT Kraken X52, because the 280mm version appears to be more aimed towards extreme overclockers than my normal usage; this way I had more leeway on how to mount the radiator in.
  • SSD: 2×Crucial MX300 M.2 SATA. While I liked the Samsung 850 EVO, the performance of the MX300 appears to be effectively the same, and this allowed me to get the M.2 version, leaving more space if I need to extend this further.
  • HDD: Toshiba X300 5TB because there is still need for spinning rust to archive data that is “at rest”.
  • GPU: Zotac GeForce GTX 1080Ti 11GB, because since I’m spending money I may just as well buy a top of the line card and be done with it.
  • PSU: Corsair RM850i, for the first time in years betraying beQuiet! as they didn’t have anything in stock at the time I ordered this.

This is the configuration in the chassis, but that ended up not being enough. In particular, because of my own stupidity, I ended up having to replace my beloved Dell U2711 monitor. I really liked my UltraSharp, but earlier this year I damaged the DisplayPort input on it — friends don’t let friends use DisplayPort cables with hooks on them! Get those without, for extra safety, particularly if you have monitor arms or standing desks! Because of this I had been using a DualLink DVI-D cable instead. Unfortunately my new video card (and most new video cards I could see) no longer has DVI ports, preferring instead multiple DisplayPort outputs and (not even always) HDMI. The UltraSharp, unfortunately, does not support 2560×1440 over HDMI, and the DisplayPort-to-DVI adapter in the box is only for SingleLink DVI, which is not fast enough for that resolution either. DualLink DVI adapters exist, but for the most part they are “active” converters that require a power supply and more cables, and are not cheap (I have seen a “cheap” one for £150!)
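The SingleLink limitation is easy to check with back-of-the-envelope math: a single TMDS link tops out at a 165MHz pixel clock, and 2560×1440 at 60Hz needs more than that even before accounting for blanking intervals:

```python
# Active-pixel clock for 2560x1440 @ 60 Hz (real blanking pushes it higher)
pixel_clock_hz = 2560 * 1440 * 60
SINGLE_LINK_DVI_HZ = 165_000_000   # single-link DVI TMDS pixel clock limit

print(pixel_clock_hz)                       # → 221184000
print(pixel_clock_hz > SINGLE_LINK_DVI_HZ)  # → True
```

DualLink doubles the TMDS pairs and therefore the effective pixel clock, which is why it handles 2560×1440 comfortably.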

I ended up buying a new monitor too, and settled for the BenQ BL2711U, a 27-inch, 4K, 10-bit monitor “for designers” that boasts 100% sRGB coverage. This is not my first BenQ monitor; a few months ago I bought a BenQ BL2420PT, a 24-inch monitor “for designers” that I use for both my XPS and my work laptop, switching between one and the other as needed over USB-C, and I have been pretty happy with it altogether.

Unfortunately the monitor came with DisplayPort cables with hooks, once again, so at first I decided to connect it over HDMI instead. That was a big mistake, for multiple reasons. The first is that calibrating it with the ColorMunki showed a huge gap between the uncalibrated and calibrated colours. The second was that, when I went to look into it, I could not enable 10-bit (10 bpc) mode in the NVIDIA display settings.

Repeat after me: if you want to use a BL-series BenQ monitor for photography you should connect it using DisplayPort.

The two problems were solved after switching to DisplayPort (temporarily with hooks; I have already ordered a proper cable): 10bpc mode is not available over HDMI when using 4K resolution at 60Hz. HDMI 2 can do 4K and 10-bit (HDR), but only at a lower framerate, which makes it fine for watching HDR movies and streaming, but not good for photo editing. The problem with the calibration was the same problem I had noticed, but couldn’t be bothered figuring out how to fix, on my laptops: some of the gray highlighting of text would not actually be visible. For whatever reason, BenQ’s “designer” monitors ship with the HDMI colour range set to limited (16-235) rather than full (0-255). Why did they do that? I have no idea. Indeed, switching the monitor to sRGB mode, full range, made the calibration effectively unnecessary (I still calibrated it out of nitpickery), and switching to DisplayPort removes the whole question of whether it should use limited or full range.
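For reference, the limited-range encoding squeezes the full 0-255 scale into 16-235, which is why adjacent near-white levels (like that gray text highlighting on a white background) can collapse into the same displayed value. A sketch of the standard mapping:

```python
def full_to_limited(value: int) -> int:
    """Map a full-range (0-255) video level to limited range (16-235)."""
    # 219 output steps have to represent 255 input steps
    return round(16 + value * 219 / 255)

print(full_to_limited(0), full_to_limited(255))  # → 16 235
```

If the monitor then interprets those levels as full range (or vice versa), blacks turn gray and whites clip, which matches the calibration gap described above.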

While the BenQ monitors have fairly decent integrated speakers, which make it unnecessary to have a soundbar for hearing system notifications or chatting with my mother, they are not the greatest option to play games on. So I ended up getting a pair of Bose Companion 2 speakers, which are more than enough for what I need them for.

Now I have an overly powerful computer and a very nice-looking monitor. How do I connect them to the Internet? Well, here’s the problem: the Hyperoptic socket is in the living room, way too far from my computer to be useful. I could have just put a random WiFi adapter on it, but I also needed a new router anyway, since the box with my fairly new Linksys also got lost in the moving process.

So upon suggestion from a friend, and a recommendation from Troy Hunt I ended up getting a UAP-AC-PRO for the living room, and a UAP-AC-LITE for the home office, topped it with an EdgeRouter X (the recommendation of which was rescinded afterwards, but it seems to do its job for now), and set them as a bridge between the two locations. I think I should write down networking notes later, but Troy did that already so why bother?

So at the end of this whole thing I spent way more money on hardware than I planned to, I got myself a very nice new computer, and I have way too many extra cables, plus the whole set of odds and ends from the old computer, router and scanner that are no longer useful (I still have the antennas for the router, and the power supply for the scanner). And I’m still short of a document scanner, which is a bit of a pain because I now have a collection of documents that need scanning. I could use the office’s scanners, but those don’t run OCR on the documents, and I have not seen anything decent to apply OCR to PDFs after the fact. I’m open to suggestions, as I’m not sure I’m keen on ending up buying something like the EPSON DS-310 just for the duplex scanning and the OCR software.

glucometerutils news: many more meters, easier usage and Windows support

You have probably noticed by now that I write about glucometers quite a bit, not only reviewing them as a user, but also reverse engineering them to figure out their protocols. This all started four years ago when I needed to send my glucometer readings to my doctor and ended up having to write my own tool.

That tool started almost as a joke, particularly given I wrote it in Python, which at the time I was not an expert in at all (I have since learnt a lot more about it, and at work I got to be more of an expert than I’d ever expected to be). But I always knew that it would be, for the most part, just a proof of concept. Not only is exporting CSV mostly useless, but the most important part of diabetes management software is the analysis, and I don’t have any clue how to do analysis.

At first I thought I could reuse some of the implementation to expand Xavier’s OpenGlucose, but it turned out not to be easy for those meters using serial adapters or USB devices other than the HID ones he already implemented. Of course this does mean it would probably work fine for things like the FreeStyle Libre – for which I appear to have written the only Linux download software – but even in that case, things are more complicated.

Indeed, as I have noted here and there previously, we need a better format to export glucometer data, and in particular the data from continuous or mixed meters like the Libre. My current output format for it only includes the raw glucose readings from the meter that are not marked as errors; it does provide an unstructured text comment that tells you whether the reading is coming from the background sensor, an explicit scan or a blood sample, but it does not provide the full level of detail of the original readings. And it does not expose ketone readings at all, despite the fact that most of the FreeStyle-line devices support them and I even documented how to get them. But this is a topic for a different post, I think.
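To make concrete what a more structured export could carry, here’s a sketch of a richer reading record — all the names here are my own invention for illustration, not the actual glucometerutils data model:

```python
import dataclasses
import datetime
import enum

class MeasurementMethod(enum.Enum):
    # The distinctions the current unstructured comment encodes.
    BLOOD_SAMPLE = "blood sample"
    SENSOR_SCAN = "explicit sensor scan"
    BACKGROUND_SENSOR = "background sensor reading"

@dataclasses.dataclass
class Reading:
    timestamp: datetime.datetime
    value_mgdl: float
    method: MeasurementMethod
    is_ketone: bool = False  # ketone readings are currently not exposed at all

    def to_row(self):
        """Flatten to a CSV-style row; a structured dump would keep the enum."""
        return [self.timestamp.isoformat(), self.value_mgdl,
                self.method.value, self.is_ketone]
```

A structured serialization (JSON or similar) of records like these would preserve the reading’s provenance instead of flattening it into a free-text comment.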

On the other hand, over the past four years, the number of meters increased significantly, and I even have a few more that I only have partially reversed and not published yet. Currently there are 9 drivers, covering over a dozen meters (some meters share the same driver, either because they are just rebranded versions or simply because they share the same protocol). One is for the InsuLinx, which also loses a bunch of details, and is based off Xavier’s own reverse engineering — I did that mostly because all the modern FreeStyle devices appear to share the same basic protocol, and so writing new drivers for them is actually fairly trivial.

This would make the project an interesting base if someone feels like writing a proper UI for it. If I ever tried to look into that, I may end up just starting an HTTP server and providing everything over HTML for the browser to render. After all, that’s actually how OpenGlucose does things, except there is no server, and the browser is embedded. Alternatively one could just write an HTML report file out, the same way Accu-Chek Mobile does using data URLs and JavaScript bundles.

One of the most important usability changes I have added recently, though, is allowing the user not to specify the device path. When I started writing the tool, I started by looking at serial adapter based devices, which usually come with their own cable, and you just access it. The next driver was for the LBA-over-SCSI used in the OneTouch Verio, which I could have auto-detected but didn’t, and for the following ones, mostly based on HID, I just expected to be given a hidraw path.

But all of this is difficult, and indeed I had more than a few people asking me which device they are meant to use, so over the past few months I adapted the drivers to try auto-detecting the devices. For the serial port based meters, the auto-detection targets the original manufacturer’s cable, so if you have a custom one, you should still pass a path explicitly. For HID based devices, you also need the Python hidapi library, because I couldn’t be bothered to write my own HID bus parsing on Linux…
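The selection logic itself is trivial; here’s a sketch of it as a pure function over the list of dicts that hidapi’s `hid.enumerate()` returns (the field names match that API, but the function itself is illustrative, not the tool’s actual code):

```python
def find_hid_device(devices, vendor_id, product_id=None):
    """Pick the path of the first enumerated HID device matching the IDs.

    `devices` is a list of dicts as returned by hidapi's hid.enumerate(),
    each containing at least 'vendor_id', 'product_id' and 'path'.
    Returns None when no matching device is attached.
    """
    for info in devices:
        if info["vendor_id"] != vendor_id:
            continue
        if product_id is not None and info["product_id"] != product_id:
            continue
        return info["path"]
    return None
```

With the real library this would be called as `find_hid_device(hid.enumerate(), vendor_id)`, and the resulting path handed to the device-open call, falling back to asking the user for a path when nothing matches.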

… and the library also brings another important feature: it works on non-Linux operating systems. Indeed I now have not one but two confirmed users that managed to use the tool on Windows, for two separate FreeStyle devices (at the time of writing, the only ones implementing HID-based protocols, although I have another one in the pipeline).

Supposedly, all of this should work fine on macOS (I almost called it OS X), though the one person who contacted me trying to get it working there has been having trouble with it — I think the problem has something to do with the Python version available (I’m targeting Python 3 because I had a very hard time doing the right processing with Python 2.7). So if you want to give it a try, feel free.

And yes, I’m very happy to receive pull requests, so if you want to implement that HTML output I talked about above, you’re awesome and I’m looking forward to your pull request. I’m afraid I won’t be much help with the visualisation though.

Playing old games

I already have a proper, beefy gamestation which I use to play games the few days a year I spend at home. It’s there for games like Fallout 4, Skyrim and the lot where actual processing power is needed. I also use it for my photo editing, since I ended up accepting that Adobe tools are actually superior (particularly in long-term compatibility support) to anything I could find open-source.

On the other hand, I spend a significant amount of time “on the road”, as they say, travelling for conferences, or meeting my supported development teams, or just trying to get some time for myself, playing Ingress, or whatever else. The guy who was so scared of flying is now clearly a frequent flyer and one that likes seeing confs and cons.

This means that I spend a significant amount of time in a hotel room, without my gamestation and with some will to play games. Particularly when I’m somewhere for work, and so not spending the evenings out with friends — I do that sometimes when I’m out for work too, but not always. I have for a very long while spent the hotel time writing blog posts, but since the blog went down I didn’t (and even now, because of what I chose to use, it’s going to be awkward since it ends up requiring SSH access to post.) After that I spent some of the time by effectively working overtime, writing design docs and figuring out work-related problems; this is not great, not only because it leaves me with a horrible work/life balance, but also because I wouldn’t want to give the impression to my colleagues that this is something we need to do, particularly those who joined after me.

So on my last US trip, back in April, I was thinking of what I could actually play during my stay. Games on mobile and tablet… stop being satisfying pretty quickly. I used to have a PSP (I didn’t bring it with me), but except for Monster Hunter Freedom, most of the games I’ve played have been JRPGs — I was considering getting myself a PlayStation Vita so that I could play Tales of Hearts R, but then decided against it, because seriously, the Vita platform clearly failed a long time ago. I briefly considered the latest iteration of Nintendo’s portable (remember this is before they announced the Switch), but decided against that too, because I simply don’t like the form factor.

I settled on getting myself an Ideapad 100S, a very cheap Windows laptop, and a random HP bluetooth mouse — total damage, less than €200. This is a very underpowered device if you want to use it for anything at all, including browsing the net, but the reason I bought it is actually much simpler: it is powerful enough to play games such as Caesar 3, Pharaoh, The Settlers IV and so on. And while I may have taken a not very ethical approach to these back in the day, these games are easily, and legally, available on gog.com.

While they are not ported to Linux, some of them do run under Wine; on the other hand, I did not want to spend time trying to get them to work on my Linux laptop, because I want to play to relax, not to get even more aggravated when things stop working. So instead I play them on that otherwise terrible laptop.

I actually did not play on it on my last trip, which included two 12-hour flights (between Paris and Shanghai, and between Tokyo and Paris), but that was because I was visiting China, and I’m trained to be paranoid; otherwise I have had quite a bit of luck playing Pharaoh and company even in the economy section. The only game I have not managed to play on it yet is NoX: for whatever reason, the screen flickers when I try to start it up. I should just try that one on Wine; I’m fairly sure it works.

I’m actually wondering how many people have been considering reimplementing these games based on the original assets; I know people have over time done that for Dune 2000 and for Total Annihilation, but I have not dared trying to figure out if anyone tried for other games. It would definitely be interesting. I have not played any RTS in a while, even though I do have a copy of Age of Empires 2 HD on my gamestation; I only played a couple of deathmatch games online with friends, and even that was difficult to organize, what with all of us working, and me almost always being in different timezones.

On a more technical point, the Lenovo laptop is quite interesting. It’s very low specs, but it has some hardware that is rare to find on PCs at all, particularly it comes with an SDIO-based WiFi card. I have not tried even getting Linux to run on it, but if I were bored, I’m sure it would be an interesting set of hardware devices that might or might not work correctly, on that one.

Oh well, that’s a story for another time.

Diabetes control and its tech: reverse engineering the OneTouch Verio

Bear with me — this post will start with a much longer trial-and-error phase than the previous one…

I received the OneTouch Verio glucometer from LifeScan last year, when I noticed that my previous glucometer (the protocol of which was fully specified on their website) was getting EOL’d. I used it for a couple of months, but as I posted that review, I was suggested a different one, so I moved on. It stayed in the back of my mind, though, as LifeScan refused to provide the protocol for it.

So over the past week, after finishing the lower-hanging fruit I decided to get serious and figure out how this device worked.

First of all, unlike the older OneTouch devices I own, this device does not use a TRS (stereo-jack) serial port, instead it comes with a standard micro-A USB connector. This is nice as the previous cables needed to be requested and received before you could do anything at all with the software.

Once connected, the device appears to the operating system as a USB Mass Storage device – a thumbdrive – with a read-only FAT16 partition with a single file in it, an HTML file sending you to LifeScan’s website. This is not very useful.

My original assumption was that the software would use a knocking sequence to replace the mass storage interface with a serial one — this is what most of the GSM/3G USB modems do, which is why usb_modeswitch was created. So I fired up USBlyzer again (which by now I bought a license of, lacking a Free Software alternative for the moment) and started tracing. But not only did no new devices or interfaces appear in the Device Manager tree, I couldn’t see anything out of the ordinary in the trace.

Since at first I was testing this on a laptop that had countless services and things running (this is the device I used for the longest time to develop Windows software for customers), I then wanted to isolate the specific non-mass storage USB commands the software had to be sending to the device, so I disabled the disk device and retried… to find the software didn’t find the meter anymore.

This is when I knew things were going to get complicated (thus why I moved onto working on the Abbott device then). The next step was to figure out what messages the computer and meter were exchanging; unfortunately USBlyzer does not have a Wireshark export, so I had to make do with exporting to CSV and then reassembling the information from that. Let me just say it was not the easiest thing to do, although I now have a much more polished script for it — it’s still terrible, so I’m not sure I’m going to publish it any time soon.

The first thing I did was extracting the URBs (USB Request Blocks) in binary form from the hex strings in the CSV. This would allow me to run strings on them, in the hope of seeing something such as the meter’s serial number. When reverse engineering an unknown glucometer protocol, it’s good to keep in mind that essentially all diabetes management software relies on the meters’ serial numbers to connect the readings to a patient. As I’ve later discovered, I was onto something, but either strings is buggy or I used the wrong parameters. What I did find then was a lot of noise, with MSDOS signatures (for MBR and FAT16) appearing over and over. Clearly I needed better filtering.
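The extraction itself is the boring part; something along these lines does the job (the CSV column name is a guess at USBlyzer’s export layout, and the second function is a poor man’s strings(1)):

```python
import csv
import re

def extract_urb_payloads(csv_path, hex_column="Hex Dump"):
    """Extract binary URB payloads from a USBlyzer CSV export.

    The column name is an assumption; the actual export layout may differ.
    """
    payloads = []
    with open(csv_path, newline="") as csvfile:
        for row in csv.DictReader(csvfile):
            hexstr = row.get(hex_column, "").replace(" ", "")
            if hexstr:
                payloads.append(bytes.fromhex(hexstr))
    return payloads

def printable_strings(data, minimum=4):
    """Poor man's strings(1): runs of printable ASCII of a minimum length."""
    return [match.group().decode("ascii")
            for match in re.finditer(rb"[ -~]{%d,}" % minimum, data)]
```

Running `printable_strings` over the concatenated payloads is the equivalent of what I tried with strings, minus whatever bug or wrong flag tripped me up.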

I’ve enhanced the parsing to figure out what the URBs meant. Turns out that USB Mass Storage uses the signatures USBC and USBS (for Command and Status) – which also explained why I saw them in the Supermicro trace – so it’s not too difficult to identify them, and ignore them. Once I did that, the remaining URBs didn’t make much sense either, particularly because I could see they were only the data being written and read (many of them matched blocks from the device’s content).
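Filtering those out is then a one-liner: the Mass Storage Bulk-Only transport starts each Command Block Wrapper with the ASCII signature USBC and each Command Status Wrapper with USBS, so a sketch of the filter looks like:

```python
def strip_mass_storage_wrappers(urbs):
    """Drop USB Mass Storage CBW ('USBC') and CSW ('USBS') blocks,
    keeping only the data-phase URBs in between."""
    return [urb for urb in urbs if not urb.startswith((b"USBC", b"USBS"))]
```

What survives the filter is exactly the data being written to and read from the device, which is why the result still looked like disk content at this stage.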

So I had to dig further. USB is somewhat akin to a networking stack, with different layers of protocols one on top of the other — the main difference being that the USB descriptor (the stuff lsusb -v prints) contains the information for all levels, rather than providing that information on each packet. A quick check on the device’s interface tells me that it is indeed a fairly standard one:

Interface Descriptor:
  bLength                 9
  bDescriptorType         4
  bInterfaceNumber        0
  bAlternateSetting       0
  bNumEndpoints           2
  bInterfaceClass         8 Mass Storage
  bInterfaceSubClass      6 SCSI
  bInterfaceProtocol     80 Bulk-Only
  iInterface              7 LifeScan MSC

What this descriptor says is that the device is expecting SCSI commands, which is indeed the case for most USB thumbdrives — occasionally, a device might report itself as using the SDIO protocol, but that’s not very common. The iInterface = LifeScan MSC setting, though, says that there is an extension of the protocol that is specific to LifeScan. Once again here I thought it had to be some extension to the SCSI command set, so I went to look for the specs of the protocol, and started looking at the CDBs (command blocks).

I’m not sure at this point if I was completely surprised not to see any special command at all. The only commands in the trace seemed to make sense at the time (INQUIRY, READ, WRITE, TEST UNIT READY, etc). It was clear at that point that the software piggybacked on the standard volume interface, but I expected it to access some hidden file to read the data, so I used an app to log the filesystem access and… nothing. The only files that were touched were the output Access files used by the tool.

I had to dig deeper, so I started parsing the full CDBs and looked at which parts of the disk were accessed — I could see some scattered access to what looked like the partition table (but wasn’t it supposed to be read-only?) and some garbage at the end of the disk with System Volume Information. I dumped the content of the data read and written and used strings, but couldn’t find anything useful, even looking for Unicode characters. So I took another trace, started it with the device already connected this time, and compared — that sent me in the right direction: I could see a number of write-then-read requests happening on three particular blocks: 3, 4 and 5.

At that point I tried to focus on the sequence of writes and reads on those blocks, and things got interesting: some of the written and read data had the same content across sessions, which meant there was communication going on. The device is essentially exposing a register-based communication interface-over-SCSI-over-USB. I’m not sure if that’s brilliant or crazy. But the problem remained of understanding the commands.
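As a sketch of the mechanics — not the real driver, which issues proper SCSI READ/WRITE commands for those LBAs, and with the 512-byte block size being my assumption of a standard sector size — the register exchange boils down to:

```python
BLOCK_SIZE = 512   # assumed standard sector size
REGISTER_LBA = 3   # one of the blocks (3, 4 and 5) the software polls

def exchange(disk, request):
    """Write a request into the register block, then read the answer back.

    `disk` is any seekable binary file object; against the real meter this
    write-then-read would happen via SCSI commands on the same LBA.
    """
    disk.seek(REGISTER_LBA * BLOCK_SIZE)
    disk.write(request.ljust(BLOCK_SIZE, b"\x00"))
    disk.flush()
    disk.seek(REGISTER_LBA * BLOCK_SIZE)
    return disk.read(BLOCK_SIZE)
```

The meter replaces the block’s content with its response between the write and the read, which is what makes a plain disk block behave like a communication register.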

At this point I was hoping to get some help by looking at what commands were actually being sent to the kernel, so I downloaded the latest Windows SDK and fired up WinDbg, hoping to log the events. I didn’t manage that, but I did find something even more interesting. The OneTouch software and drivers have been built with debug logging still on, probably because nobody would notice there is logging unless they attach a debugger… just like I did. This was a lucky breakthrough, because it allowed me to see what driver the software used (and thus its symbol table and function names — yes, PE would allow you to obfuscate the function names by using an import library, but they didn’t) and also to see what it thought about things.

An interesting discovery is that the software seems to communicate with its drivers via XML documents (properly human-readable ones at that), while the driver talks to the device via binary commands. Unfortunately, said commands didn’t match what I was seeing in the trace, at least not fully — I could find some subsets of data here and there, but not consistently; it looks like one of the libraries translates from the protocol the device actually accepts to another (older?) binary protocol, to speak to a driver that then converts it to XML and back to the device. This does sound dopey, doesn’t it?

Anyway, I then decided to start matching messages in the sequences. This started to be interesting. Using hexdump -C to get a human-readable copy of the content of the SCSI blocks written and read, I would see the first few lines matching between messages in the same sequence, while those after 255 bytes would be different, but in a predictable way: a four-byte word would appear at a certain address, and the following words would have the same distance from it. I was afraid this was going to be some sort of signature or cryptographic exchange — until I compared this with the trace under WinDbg, which had nothing at all after the first few lines. I then decided to filter out anything after the first 16 bytes of zeros, and compare again.

This led to more interesting results. Indeed I could see that across the three sessions, some packets would be exactly the same, while in others the written packet would be the same and the read packet would be different. And when they differed, it would be by a byte or two, plus the last two bytes. Now, one of the things I did when I started looking at WinDbg was checking the symbol table of the libraries used by the software, and one of them had a function that included crc_ccitt in its name. This is a checksum algorithm that LifeScan used before — but with a twist there as well: it used a non-standard (0xFFFF) seed. Copying the packet up until the checksum and pasting it in an online calculator confirmed that I had now found the checksum of the packet.
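For reference, a minimal implementation of that checksum — CRC-16 with the CCITT polynomial 0x1021 and the 0xFFFF seed:

```python
def crc_ccitt(data, seed=0xFFFF):
    """CRC-16/CCITT with the 0xFFFF seed, processing bytes MSB-first."""
    crc = seed
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc
```

If I’m not mistaken, this particular seed-and-polynomial combination is what generic tools label CRC-16/CCITT-FALSE, which is why an online calculator could verify the packets at all.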

At that point I opened the OneTouch UltraEasy specs (an older meter, for which LifeScan published the protocol), which shared the same checksum, and noticed at least one more similarity: the messages are framed the same way (0x02 at the beginning, 0x03 at the end). And the second byte matches the length of the packet. A quick comparison with the log I got off the debugger showed that the other binary protocol does not use the framing, but does use the same length specification and the same checksum algorithm. Although in this case I could confirm the length is defined as 16-bit, as this intermediate protocol reassembled what soon clearly appeared to be a set of separate responses into one.
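Putting the framing observations together, a packet can be split along these lines — with the caveat that the exact positions of the length byte, the ETX and the checksum are my reading of the trace, not a verified specification:

```python
STX, ETX = 0x02, 0x03

def split_frame(packet):
    """Split a framed packet into payload and 16-bit checksum.

    Assumed layout: STX, a length byte covering the whole packet, the
    payload, ETX, then the two checksum bytes.
    """
    if packet[0] != STX or packet[-3] != ETX:
        raise ValueError("bad framing")
    if packet[1] != len(packet):
        raise ValueError("length mismatch")
    return packet[2:-3], packet[-2:]
```

Validating the frame this way before touching the payload is also a cheap sanity check that the register block actually contains a fresh response rather than stale data.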

Once you get to this point, figuring out the commands is much easier than you’d think — some of them will return things such as the serial number of the device (printed on the back), the model name, or the software version, which the debug log let me match for sure. I was confused at first because strings -el couldn’t find them in the binary files, but strings -eb did… they are not big-endian though. At this point, there are a few things that need to be figured out to write a properly useful driver for the meter.

The first low-hanging fruit is usually to be found in the functions to get and set time, which, given I couldn’t see any strings around, I assumed to be some sort of timestamp — but I couldn’t find anything that looked like the day’s timestamp in the trace. To be honest, there was an easier way to figure this out, but the way I did it was by trying to figure out the reading record format. Something that looked like a 32-bit counter with high values could be found, so I compared it with a similar-looking value in a promising command, and looked at the difference — the number, interpreted as seconds, gave me a 22-week delta, which matched the delta between the last reading and the trace — I was onto something! Given I knew the exact timestamp of the last reading, the difference between it and the number I had brought me exactly to January 1st 2000, the device’s own epoch.
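Converting the device’s counter back to a date is then trivial; a small sketch:

```python
import datetime

# The device counts seconds from its own epoch, January 1st 2000.
DEVICE_EPOCH = datetime.datetime(2000, 1, 1)

def from_device_timestamp(seconds):
    """Convert the meter's 32-bit seconds counter into a datetime."""
    return DEVICE_EPOCH + datetime.timedelta(seconds=seconds)
```

The reverse direction (for setting the time) is the same subtraction expressed as `int((now - DEVICE_EPOCH).total_seconds())`.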

Once again, from there things get easier — the format of the records is simple: it includes a counter and what I soon realized to be a lifetime counter, the timestamp with the device’s own epoch, some (still unknown) flags, and the reading value in mg/dL, as usual for most devices. What was curious was that the number shown in the debug log’s XML does not match the mg/dL reading, but the data in the protocol matches what the device and software show for each reading, so it’s okay.
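As a purely illustrative sketch of parsing such a record — the field order and sizes here are not the verified layout, only the fields the analysis above identified, packed little-endian:

```python
import struct

def parse_record(record):
    """Parse a single reading record.

    Field order and widths are assumptions for illustration: a 32-bit
    counter, a 32-bit lifetime counter, a 32-bit timestamp (seconds since
    2000-01-01), 16-bit flags and the 16-bit mg/dL value.
    """
    counter, lifetime, timestamp, flags, value = struct.unpack_from(
        "<IIIHH", record)
    return {"counter": counter, "lifetime": lifetime,
            "timestamp": timestamp, "flags": flags, "value_mgdl": value}
```

The real layout is in the protocol documentation in my repository; the point here is only that once the fields are identified, a `struct` format string is all the parsing a driver needs.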

While I was working on this, I was approached over Twitter by someone with a OneTouch Select Plus meter, which is not sold in Ireland at all. I asked him for a trace of the device and compared it with my tools and the reverse engineering I had to that point, and it appears to use the same protocol, although it replies with a lot more data to one of the commands I have not found the meaning of (and that the device doesn’t seem to need — there’s no knock sequence, so it’s either there to detect some other model, or a kind of ping-back to the device). The driver I wrote should work for both. Unfortunately they are both mmol/L devices, so I can’t tell for sure how a mg/dL device would report its unit.

One last curiosity came up while comparing the protocol as I reversed it with the OneTouch UltraEasy protocol that was published by LifeScan. Many of the commands actually match, including the “memory reset” one, with one difference: whereas the UltraEasy commands (after the preamble) start with 0x05, the Verio commands start with 0x04 — so for instance memory reset is 05 1a on the UltraEasy, but 04 1a on the Verio.

The full documentation of the protocol as I reversed it is available in my repository, and glucometerutils gained an otverio2015 driver. For the driver I needed to fix the python-scsi module so it could actually send SCSI commands over the SGIO interface in Linux, but that is fixed upstream now.

If you happen to have this device, or another LifeScan device that appears as a USB Mass Storage, but using mg/dL (or something that does not appear to work with this driver), please get in touch so I can get a USB trace of its dumping memory. I could really use the help.

I won’t be spending time reverse engineering anything this weekend, because I’m actually spending time with friends, but I can confirm that there will be at least one more device getting reverse engineered soon — though the next post will first be a review of it. The device is the Abbott FreeStyle Libre, for which I can’t link a website, as it would just not appear if you’re not in one of (the one?) country it’s sold in. Bummer.

Diabetes control and its tech: FreeStyle Optium reverse engineering

As I said in previous posts, I decided to spend some time reverse engineering the remaining two glucometers I had at home for which the protocol is not known. The OneTouch Verio is proving a complex problem, but the FreeStyle Optium proved itself much easier to deal with, if nothing else because it clearly uses a serial protocol. Let’s line up all the ducks to get to the final (mostly) working state.

Alexander Schrijver already reverse engineered the previous Freestyle protocol, but that does not work with this model at all. As I’ll say later, it’s still a good thing to keep this at hand.

The “strip-port” cable that Abbott sent me uses a Texas Instruments USB-to-serial converter chip, namely the TIUSB3410; it’s supported by the Linux kernel just fine by itself, although I had to fix the kernel to recognize this particular VID/PID pair; anything after v3.12 will do fine. As I found later on, having the datasheet at hand is a good idea.

To reverse engineer a USB device, you generally start by snooping a session on Windows, to figure out what the drivers and the software tell the device and what they get back. Unfortunately usbsnoop – the open source Windows USB snooper of choice – has not been updated in a few years and does not support Windows 10 at all. So I had to search harder for an alternative.

Windows 7 and later support USB event logging through ETW natively, and thankfully Microsoft more recently understood that those instructions are way too convoluted, and actually provides an updated guide based on Microsoft Message Analyzer, which appears to be their answer to Wireshark. Try as I might, I have not been able to get MMA to provide me useful information: it shows me the responses from the device just fine, but it does not show me the commands as sent by the software, making it totally useless for the purpose of reverse engineering. I’m not sure whether that’s by design, or me not understanding how it works and missing some setting.

A quick look around pointed me at USBlyzer, which is commercial software, but one that has both a complete free trial and an affordable price ($200) — at least now that I’m fully employed, that is. So I decided to try it out, and while the UI is not as advanced as MMA’s, it does the right thing and shows me all the information I need.

Start of capture with USBlyzer

Now that I have a working tool to trace the USB inputs and outputs, I recorded a log while opening the software – actually, it auto-starts – downloading the data, checking the settings and changing the time. Now it’s time to start making heads or tails of it.

First problem: the TI3410 requires firmware to be uploaded when it’s connected, which means a lot of the trace is gibberish that you shouldn’t really spend time staring at. On the other hand, the serial data is transferred over raw URBs (USB Request Blocks), so once the firmware is set up, the I/O log is just what I need. So, scroll away until something that looks like ASCII data comes up (not all serial protocols are ASCII of course — the Ultra Mini uses a binary protocol, so identifying that would have been trickier — but it was my first guess).

ASCII data found on the capture

Now with a bit of backtracking I can identify the actual commands: $xmem, $colq and $tim (the last with parameters to set the time). From here it would all be simple, right? Well, not really. The next problem is to figure out the right parameters to open the serial port. At first I tried the two “obvious” positions: 9600 baud and 115200 baud, but neither worked.

I had to dig up a bit more. I went to the Linux driver and started fishing around for how the serial port is set up on the 3410 — given the serial interface is not encapsulated in the URBs, I assumed there had to be a control packet, and indeed there is. Scrollback to find it in the log gives me good results.

TI3410 configuration data

While the kernel has code to set up the config buffer, it obviously doesn’t have a parser, so it’s a matter of reading it correctly. The bRequest = 05h in the Setup Packet corresponds to the TI_SET_CONFIG command in the kernel, so that’s the packet I need. The raw data is the content of the configuration structure, which declares a standard 8N1 serial format, although the 0x0030 value set for the baud rate is unexpected…

Indeed the kernel has a (complicated) formula to figure out the right value for that element, based on the actual baud rate requested, but reversing it is complicated. Luckily, checking the datasheet of the USB-to-serial converter I linked earlier, I can find a description of that configuration structure value, and a table that provides the expected values for the most common baud rates; 0x0030 sets a rate close to 19200 (within 0.16% error), which is what we need to know.

It might be a curious number to choose for a USB-to-serial adapter, but a quick chat with colleagues tells me that in the early ‘90s this was actually the safest, fastest speed you could set for serial ports in many operating systems. Why this is still the case for a device that clearly uses USB is a different story.

So now I have some commands to send to the device, and I get some answers back, which is probably a good starting point; from there on, it’s a matter of writing the code to send the commands and parse the output… almost.
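A sketch of the command side, kept free of pyserial so the logic can be tested without hardware; the comma separator and CR LF terminator are assumptions on my part, as the trace only shows the command names themselves:

```python
def build_command(name, *args):
    """Build one of the meter's ASCII commands ($xmem, $colq, $tim).

    The argument separator and the CR LF terminator are assumptions --
    the trace only establishes the command names and that $tim carries
    parameters to set the time.
    """
    command = "$" + name
    if args:
        command += "," + ",".join(str(arg) for arg in args)
    return (command + "\r\n").encode("ascii")
```

The port itself would be opened with pyserial at the 19200 8N1 settings identified above, along the lines of `serial.Serial(path, 19200, timeout=1)`, and the commands written to it as-is.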

One thing that I’m still fighting with is that sometimes it takes a lot of tries for the device to answer me, whereas the software seems to identify it in a matter of seconds. As far as I can tell, this happens because the Windows driver keeps sending the same exchange over the serial port, to see if a device is actually connected — since there are no hotplug notifications to wake it up, and, as far as I can see, it’s the physical insertion of the device that does wake it up. Surprisingly though, sometimes I read back from the serial device the same string I just sent. I’m not sure what to make of that.

One tidbit of interesting information is that there are at least three different formats for the dates the device provides. One comes in the response to the $colq command (which provides the full information of the device), one at the start of the response to the $xmem command, and another in the actual readings. With the exception of the first, they match the formats described by Alexander, including the quirk of using three-letter abbreviations for months… except June and July. I’m still wondering what was in their coffee when they decided on this date format. It doesn’t seem to make sense to me.
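If you need to parse those dates, the month table ends up looking like this — assuming “except June and July” means those two months are spelled out with four letters rather than abbreviated:

```python
# Month names as the meter prints them: three-letter abbreviations,
# except June and July, which (I assume) are spelled with four letters.
MONTHS = {name: number for number, name in enumerate(
    ["Jan", "Feb", "Mar", "Apr", "May", "June",
     "July", "Aug", "Sep", "Oct", "Nov", "Dec"], start=1)}

def parse_month(name):
    """Map the device's month name to its 1-based month number."""
    return MONTHS[name]
```

An explicit lookup table like this beats trying to coax strptime’s %b into accepting the two odd entries.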

Anyway, I have added support to glucometerutils and wrote a specification for it. If you happen to have a similar device but for a non-UK or Irish market, please let me know what the right strings should be to identify the mg/dL values.

And of course, if you feel like contributing another specification to my repository of protocols I’d be very happy!

Diabetes control and its tech: reverse engineering glucometer tools, introduction

In the time between Enigma and FOSDEM, I had been writing some musings on reverse engineering, to the point that I intended to spend a weekend playing with an old motherboard to get it to run Coreboot. I decided to refocus instead: while I knew the exercise would be pointless (among other things, because coreboot does purge obsolete motherboards fairly often), and I was interested in it only to prove to myself that I had the skills to do it, I found there was something else I should be reverse engineering that would have actual impact: my glucometers.

If you follow my blog, you know I have written about diabetes before, and in particular about my Abbott FreeStyle Optium and the LifeScan OneTouch Verio, both of which lack a publicly available protocol definition, though the manufacturers make custom proprietary software available for them.

Unsurprisingly, if you’re at all familiar with the quality level of consumer-oriented, healthcare-related software, the software is clunky, out of date, and barely working on modern operating systems. Which is why the simple, almost spartan, HTML reports generated by the Accu-Chek Mobile are a net improvement over using that software.

The OneTouch software in particular has not been updated in a long while, and is still not a Unicode Windows application. This would be fine, if it didn’t also decide that my “sacrificial laptop” had incompatible locale settings, forcing me to spend a good half hour trying to configure it in a way that it found acceptable. It also requires a separate download for “drivers” totalling over 150MB of installers. I’ll dig into the software separately as I describe my odyssey with the Verio, but I’ll add this: since the installation of the “drivers” is essentially a sequence of separate installers for both kernel-space drivers and userland libraries, it is not completely surprising that one of them fails. I forgot which command returned the error, but something used by .NET no longer accepts the parameters passed to it during the install, so at least one of the meters would not work under Windows 10.

Things are even more interesting for FreeStyle Auto-Assist, the software provided by Abbott. The link goes to the Irish website (given I live in Dublin), though it might redirect you to a more local one. Abbott probably thinks there is no reason for someone living in the Republic to look at an imperialist website, so even if you click on the little flag at the top-right, it will never send you to the UK website, at least coming from an Irish connection… which means that to see the UK version I need to use TunnelBear. No worries though, because no matter whether you’re Irish or British, the moment you try to download the software, you’re presented with a 404 Not Found page (at least as of writing, 2016-02-06). I managed to get a copy of the software from their Australian website instead.

As an aside, I was told some time ago about a continuous glucose meter from Abbott, which looked very nice, as the sensor seemed significantly smaller than those of other CGMs I’ve seen. Unfortunately, when I went to check on the (UK) website, its promotional and tutorial YouTube videos were region-locked away from me. Guess I won’t be moving to that meter any time soon.

I’ll be posting some more rants about the problems of reverse engineering these meters as I accumulate results or frustration, so hang tight if you’re curious. And while I don’t usually like asking people to share my posts, I think for once it might be beneficial to spread the word that diabetes care needs better software. So if you feel like sharing this or any other of my posts on the subject, please do!