Comixology for Android: bad engineering, and a cautionary tale against DRM

I grew up as a huge fan of comic books. Not only Italian Disney comics, which are something in their own right, but also US comics from Marvel. You could say that I grew up on Spider-Man and Duck Avenger. Unfortunately, holding on to physical comic books is getting harder nowadays, simply because I’m travelling all the time, and I also try to keep as little physical media as I can, given the space constraints of my apartment.

Digital comics are thus a big incentive for me to keep reading. In particular, a few years ago I started buying my comics from Comixology, which was later bought by Amazon. The reason I chose this particular service over others is that it let me buy, and read, through a single service, comics from Marvel, Dark Horse, Viz and a number of independent publishers. All of this sounded good to me.

I have not been reading a lot over the past few years, but as I moved to London, I found that tube rides are the perfect length of time to catch up on the latest Spider-Man or finish up those Dresden Files graphic novels. So at some point last year I decided to get myself a second tablet, one that is easier to bring on the Tube than my neat but massive Samsung Tab A.

While Comixology is available for the Fire Tablet (being an Amazon property), I settled on the Lenovo Tab 4 8 Plus (what a mouthful!), which is a pretty neat “stock” Android tablet. Indeed, Lenovo’s customization of the tablet is fairly limited, and besides some clearly broken settings in the base firmware (which insisted on setting up Hangouts as the SMS app, despite the tablet not having a baseband), it works quite neatly, and it has a positively long-lasting battery.

The only real problem with that device is that it has very limited storage. It’s advertised as a 16GB device, but the truth is that only about half of that is available to the user. And that’s only after effectively uninstalling most of the pre-installed apps, most of which are thankfully not marked as system apps (which means you can fully uninstall them, instead of just keeping them disabled). Indeed, with each firmware update, fewer apps seem to be marked as system apps — on my tablet the only three apps currently disabled are the File Manager, Gmail and Hangouts (this is a reading device, not a communication device). I can (and should) probably disable Maps, Calendar, and Photos as well, but that’s going to be for later.

Thankfully, this is not a big problem nowadays, as Android 6 introduced adoptable storage, which allows you to use an additional SD card for storage, transparently for both the system and the apps. It can be a bit slow depending on the card and on how you use the device, but for a reading device it works just great.
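
For reference, adoption normally happens through the Settings UI when the card is first inserted, but it can also be forced over adb; this is only a sketch, and the disk identifier below (179,64) is a placeholder that has to be replaced with the actual output of the first command:

adb shell sm list-disks
adb shell sm partition disk:179,64 private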

You were able to move apps to the SD card in older Android versions too, but in those cases you would end up with non-encrypted apps that would still store their data on the device’s main storage. For those cases, a number of applications, including for instance Audible (also an Amazon offering) allow you to select an external storage device to store their data files.

When I bought the tablet and the SD card and installed Comixology, I didn’t know much about this part of Android, to be honest. I only checked whether Comixology allowed storing the comics on the SD card, and since I found that it did, I was all happy. I had adopted the SD card without realizing what that actually meant, though, and that was the first problem: the documentation from Comixology didn’t match my experience, as the setting to choose the SD card for storage never appeared. I contacted tech support, who kept asking me questions about the device and what I was trying to do, but provided no solution.

As it turned out, everything was alright: since I had adopted the SD card before installing the app, the app was automatically installed onto it, and it used the card for storage, which allowed me to download as many comic books as I wanted without any bother at all.

That was the case until earlier this year, when I couldn’t update the app anymore. It kept failing with a strange Play Store error. So I decided to uninstall and reinstall it… at which point I had no way to move it back to the SD card! They had disabled the manifest option that allows the application to be moved, and that’s why the Play Store was unable to update it.

Over a month ago I contacted Comixology tech support, telling them what was going on, assuming that this was an oversight. Instead I kept getting stubborn responses insisting that moving the app to the SD card didn’t move the comics (wrong), or insinuating that I was using a rooted device (also wrong). I still haven’t managed to get them to reintroduce the movable app, even though the Kindle app, also from Amazon, moves to the SD card just fine. Ironically, you can read comics bought on the Kindle Store with the Comixology app but, for some reason, not vice-versa. If I could just use the Kindle app I wouldn’t even bother with installing the Comixology app.

I have now cancelled my Comixology Unlimited subscription, cancelled my subscriptions to new issues of Spider-Man, Bleach, and a few other series, and am pondering the best solution to my problem. I could use non-adopted storage for the tablet if I effectively dedicate it to Comixology — unfortunately in that case I won’t be able to download Google Play Books or Kindle content to the SD card, as they don’t support the external storage mode. I could just read a few issues at a time, using the ~7GB of space I have available on the internal storage, but that’s also fairly annoying. More likely I’ll start buying my comics from another service that has a better understanding of the Android ecosystem.

Of course the issue remains that I have a lot of content on Comixology, and only a very limited subset of comics are DRM-free. This is not, strictly speaking, Comixology’s fault: the publishers are the ones deciding whether to DRM their content or not. But it definitely shows an issue that many publishers don’t seem to grasp: faced with technical problems like this, the consumer would have had better “protection” if they had just pirated the comics!

For the moment, I can only hope that someone reading this post happens to work for, or knows someone working for, Comixology or Amazon (in the product teams — I know a number of people in the Amazon production environment, but I know they are far away from the people who would be able to fix this), and that they can get the Comixology app updated to work with modern Android, so that I can resume reading all my comics easily.

Or, if Amazon feels like it, I’d be okay with them giving me a Fire tablet to use in place of the Lenovo. Though I somewhat doubt that’s something they would be happy to do.

Anyone working on motherboard RGB controllers?

I was contacted by email last week by a Linux user, probably after noticing my recent patch for the gpio_it87 driver in the kernel. They were hoping my driver could be extended to the IT7236 chips that are used in a number of gaming motherboards to control RGB LEDs.

Having left the case-modding world after my first and only ThermalTake chassis – my mother gave me hell for the fan noise, mostly due to the plexiglass window on the side of the case – I don’t have any context whatsoever on what the current state of these boards is, whether someone has written generic tools to set the LEDs, or even UIs for them. But it was an interesting back and forth of looking for leads to figure out what is needed.

The first problem, as most of you who know a bit about electrical engineering and electronics will have guessed, is that the IT7236 chip is clearly not in the same series as the IT87xx chips that my driver supports. And since they are not in the same series, they are unlikely to share the same functionality.

The IT87xx series chips are Super I/O controllers, which means they implement functionality such as floppy-disk controllers, serial and parallel ports and similar interfaces, generally via the LPC bus. You usually know these chips’ names because they need to be supported by the kernel for their readings to show up in sensors output. In addition to these standard devices, many controllers include at least a set of general purpose I/O (GPIO) lines. On most consumer motherboards these are not exposed in any way, but boards designed for industrial applications, or customized boards, tend to expose those lines readily.

Indeed, I wrote the gpio_it87 driver (well, actually adapted and extended it from a previous driver) because the board I was working on in Los Angeles had one of those chips, and we were interested in having access to the GPIO lines to drive some extra LEDs (and possibly, in future versions, more external interfaces, although I don’t know if anything was made of those). At the time, I did not manage to get the driver merged; a couple of years back, LaCie manufactured a NAS using a compatible chip, and two of their engineers got my original driver (further extended) merged into the Linux kernel. Since then I have only submitted one other patch to add another ID for a compatible chip, because someone managed to send me a datasheet, and I could confirm it described the same behaviour as the one I originally used to implement the driver.
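
For those who have never used these, once such a driver is bound, the GPIO lines can be driven directly from userspace through the kernel’s sysfs GPIO interface; the line number below is made up, as the actual numbering depends on the base assigned to the gpiochip, and the writes need root:

echo 18 > /sys/class/gpio/export
echo out > /sys/class/gpio/gpio18/direction
echo 1 > /sys/class/gpio/gpio18/value    # LED on
echo 0 > /sys/class/gpio/gpio18/value    # LED off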

Back to the original topic, the IT7236 chip is clearly not a Super I/O controller. It’s also not an Embedded Controller (EC) chip, as I know that series is actually IT85xx, which is what my old laptop had. Somewhat luckily, though, a “Preliminary Specifications” datasheet for that exact chip is available online from a company that appears to distribute electronic components in the general sense. I’m not sure whether that was intentional or not, but having the datasheet is always handy of course.

According to these specifications, the IT7236xFN chips are “Touch ASIC Cap Button Controllers”. And indeed, ITE lists them as such. Comparing this with a different model in the same series suggests that LED driving was probably not their original target, but they came to be useful for it. These chips also include an MCU based on an 8051 core, similarly to their EC solution — this makes them, and in particular the datasheet I found earlier, a bit more interesting to me. Unfortunately the datasheet is clearly cut down to a shorter version, and does not include a programming interface description.

Up to this point, this tells us exactly one thing: my driver is completely useless for this chip, as it specifically implements Super I/O bus access, and it’s unlikely to be extensible to this series of chips. So a new driver is needed, and some reverse engineering is likely to be required. The user who wrote me also gave me two other ITE chip names found on the board they have: IT87920 and IT8686 (which appears to be a PWM fan controller — I couldn’t find it on the ITE website at all). Since the it87 (hwmon) driver is still developed out-of-kernel on GitHub, I checked and found an issue that appears to describe a common situation for gaming motherboards: the fans are not controlled by the usual Super I/O chip, but by a separate (more accurate?) one, and that suggests that the LEDs are indeed controlled by yet another separate chip, which makes sense. The user ran strings on the UEFI/BIOS image and did indeed find modules named after IT8790 and IT7236 (and IT8728 for whatever reason), confirming this.

None of this brings us any closer to supporting it though, so let’s take a look at the datasheet, where we can see that the device sits on an I²C bus, instead of the LPC (or ISA) bus used by the Super I/O and the fan controller. Which meant looking at i2cdev and lsi2c. Unfortunately their output can only tell that there are things connected to the bus, not what they are.
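
If you want to poke at this yourself, the usual starting point is the i2c-tools package; this is only a sketch, and which bus number the controller ends up on varies from board to board:

modprobe i2c-dev    # expose the buses to userspace as /dev/i2c-*
i2cdetect -l        # list the available buses
i2cdetect -y 0      # scan bus 0 for responding addresses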

This leaves us pretty much high and dry. Particularly me, since I don’t have access to the hardware. So my suggestion has been to consider looking into the Windows driver and software (which I’m sure the motherboard manufacturer provides), and possibly figure out whether they can run in a virtualized environment (qemu?) where the I²C traffic can be inspected. But there may be simpler, more useful or more advanced tools that already do most of this, since I have not spent any time on this particular topic before. So if you know of any of them, feel free to leave a comment on the blog, and I’ll make sure to forward them to the concerned user (since I have not asked them whether I can publish their name, I’m not going to out them — they can, if they want, leave a comment with their name to be reached directly!).

Dell XPS 13, problems with WiFi

A couple of months ago I bought a Dell XPS 13. I’m still very happy with the laptop, particularly given the use I have in mind for it, but I have started noticing a list of problems that bother me more than a little.

The first problem is something I have spoken of in the original post and updated a couple of times: the firmware (“BIOS”) update. While the firmware is actually published through LVFS by Dell, either Antergos or Arch Linux has some configuration issue with EFI and the System Partition that causes the EFI shim to be unable to find the right capsule. I have ended up just running the update manually twice now, since I didn’t want to spend the time to fix the packaging of the firmware updater, and trying with different firmware updates is not easy.

Also, while the new firmware updates made the electrical whining noise effectively disappear, making the laptop very nice to use in quiet hotel rooms (not all hotel rooms are quiet), they seem to have triggered more WiFi problems. Indeed, it got to the point where I could not use the laptop at home at all. I’m not sure what exactly the problem is, but my Linksys WRT1900ACv2 seems to trigger known problems with the WiFi card on this model.

At first I thought it might be a problem with using Arch Linux rather than Dell’s own Ubuntu image, which appeared to ship separate Qualcomm drivers for the ath10k card. But it turns out the same error pops up repeatedly in Dell forums and on Launchpad too. A colleague with the same laptop suggested just replacing the card, getting rid of the whole set of problems introduced by the ath10k driver. Indeed, even looking around Windows users’ websites, the recommendation appears to be the same: just replace your card.

The funny bit is that I only really noticed this when I came back from my long August trips, because, since buying the laptop, I hadn’t spent more than a few days at home until that point. I have been in Helsinki, Vancouver and Seattle, and used the laptop in airports, lounges, hotels and cafes, as well as my office. And none of those places had any issue with my laptop. I used the laptop extensively to livetweet SREcon Europe from the USENIX wireless at the hotel, and it had no problem whatsoever.

My current theory is that there is some mostly-unused feature that is triggered by a high-performance access point like the one I have at home, which runs LEDE, and as such is not something you’ll encounter in the wild. This would also explain why the Windows sites I found referencing the problem suggest replacing the card — your average Windows user is unlikely to know how to do so, or to be interested in a solution that does not involve shipping the device back to Dell. And to be fair, they probably have a point: why on earth is Dell selling laptops with crappy WiFi cards?

So anyway, my solution was to order an Intel 8265 wireless card, which offers the same 802.11ac dual-band support and Bluetooth 4.2, and comes in the same form factor as the ath10k card the laptop ships with. It feels a bit strange having to open up a new laptop to replace a component, but since this is the serviceable kind of Dell, it was not a horrible experience (my Vostro laptop still has a terrible 802.11g 2.4GHz-only card in it, and I can’t replace that one easily).

Moving on to something else: the USB-C dock is working great, although I found out the hard way that if you ask Plasma (or whatever else it was that I ended up configuring) not to put the laptop to sleep the moment the lid is closed while power is connected (which I need so that I can use the laptop “docked” in my usual work-from-home setup), it also does not go to sleep if the power is subsequently disconnected. So the short version is that I now usually run the laptop without power connected unless the battery is already running low, and I can easily get through a whole day at a conference without charging, which is great!
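
An alternative I have not actually tried would be to leave Plasma alone and express this at the logind level instead (Arch uses systemd, after all); something along these lines in /etc/systemd/logind.conf should suspend on lid close except while docked to an external display:

[Login]
HandleLidSwitch=suspend
HandleLidSwitchDocked=ignore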

Speaking of charging, it turns out that the Apple 65W USB-C charger also works great with the XPS 13. Unfortunately it comes without a cable, and particularly with Apple USB-C cables your mileage may vary. It seems to be fine with the Google Pixel phone cable, though. I have not tried measuring how much power and which power mode it uses, among other things because I wouldn’t know how to query the USB-C controller to get that information. If you have suggestions, I’m all ears.

Otherwise the laptop appears to be working great for me. I only wish I could wake it from sleep without opening it when it’s docked, but that’s a minor thing.

The remaining problems are software. For instance, Plasma sometimes crashes when I dock the laptop and the external monitor comes online. And I can’t reboot while docked, because the external keyboard (connected to the USB-C dock) is not able to type in the password for the full-disk encryption. Again, this is a bother but not a big deal.

New laptop: Dell XPS 13 9360 “Developer Edition”

Since, as I announced some time ago, I’m moving to London in a few months, I’ve been spending the past few weeks organizing the move, deciding what I’ll be bringing with me and what I won’t. One of the things I decided to do was trying to figure out which hardware I would want with me, as I keep collecting hardware both for my own special projects and just out of curiosity.

I decided that having as many laptops as I have right now is a bad idea, and that it is due time to consolidate on one or two machines if possible. In particular, my ZenBook has been showing its age, with only 4GB of RAM, and my older Latitude, which is now over seven years old, no longer has a working battery (though with 8GB of RAM it would actually be quite usable!), plus it’s way too bulky for me to keep travelling with, given my usual schedule. Indeed, to have something I could play with on the road, I ended up buying an IdeaPad last year.

So, thanks to the lowered value of the Sterling (which I won’t be particularly happy about once I start living there), I decided to get myself a new laptop. I settled on the Dell XPS 13, which is not quite an Ultrabook but is quite handy and small. Its killer feature for me is the USB-C connector and the ability to charge through it: my work laptop is an HP Chromebook 13, which also charges over USB-C, and that gives me the ability to travel with a single power brick.

I ordered it from Dell UK, had it delivered to Northern Ireland and then reshipped to me, and it arrived this past Friday. The configuration I bought is the i7, 16GB, QHD (3200×1800) display with Ubuntu (rather than Windows 10). I turned it on at the office, as I wanted to make sure it was all in one piece and working, and the first surprise was the musical intro it started up with. I’m not sure whether it’s Ubuntu’s or Dell’s, but it’s annoying. I couldn’t skip it with the Esc key, and I didn’t figure out how to make it shut the heck up (although that may have been because I hadn’t yet realized that the function keys default to their special meanings).

I also found myself confused by the fact that Dell only provides the BIOS (well, EFI) update file in MS-DOS/Windows format. It turns out that not only can the firmware itself read the file natively (after all, EFI uses PE itself), but Dell is also providing the firmware through the LVFS service, which you may remember from Richard Hughes’s blog. The latest firmware for this device is not available there yet, but it should be relatively soon.

Update (2017-07-26): The new firmware was released on LVFS and I tried updating it with the fwupd tool. Unfortunately the Arch Linux package does not work at all on my Antergos install. I’m not sure whether that’s because the Antergos install changes some subtle parameter compared to an EFI install of Arch Linux itself, or because the package is completely broken. In particular, it looks like the expected paths within the EFI System Partition (ESP) are completely messed up, and fwupd does not appear to identify them dynamically. Sigh.
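
For the record, when the packaging and the ESP layout cooperate, the whole process is supposed to boil down to a handful of fwupd commands; this is the generic flow rather than anything Dell-specific:

fwupdmgr refresh        # fetch the latest firmware metadata from LVFS
fwupdmgr get-updates    # list devices with a pending firmware update
fwupdmgr update         # stage the capsule to be applied on the next reboot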

The hardware of the laptop is pretty impressive, although I’m not a fan of the empty space near the top, which looks to me like an easy catch for cables and ties and makes me fear for its integrity. The device is also denser than I was expecting: it’s quite a bit heavier than the ZenBook, although it packs much more of a punch. The borderless screen is gorgeous, but it also means the webcam sits in the bottom corner of the screen rather than at the top, likely making video calls awkward. The keyboard takes a bit of getting used to, as it’s not quite as good as the ZenBook’s, but it’s still of fairly good quality.

By the way, one of the first things I did was replace the Ubuntu install with Antergos (which is effectively Arch Linux with an easier installer). This did mean disabling Secure Boot, but I guess I’ll have to live with that until we get a better idea of how to do Secure Boot properly on Linux together with full-disk encryption.

Once I got home, I did what I do with my work laptop too: I connected it to my Anker USB-C dock, and it seemed to work alright, except for some video corruption here and there, particularly in Konsole. Then I started noticing the lack of sound — but that turned out to be a red herring. The answer is that both the on-board speakers and the HDMI audio output are wired through the same sound interface, and just appear as different “profiles”.

It wasn’t until I had already been compiling webkitgtk for an hour that I noticed the laptop wasn’t actually charging, and I thought the problem was with the dock. Instead, the answer turned out to be that the HP Chromebook 13 charger is not compatible with the XPS 13, while the Chromebook Pixel charger works fine. Why the difference? I’m not sure; I guess I need to find a way to inspect what the USB-C controller sees to understand what the problem is with that charger. It should not be a matter of wattage, as both the HP charger and the original Dell charger provided with the laptop are 45W.

Speaking of the USB-C dock, there is a funny situation: if the laptop boots with the dock connected and the lid closed, it does not appear to turn the external monitor on (all is fine if it boots with it disconnected). Also, it looks like the default DM provided by Antergos only shows the login page on the laptop’s screen, making it hard to log in at all. And, in the usual mess that multi-screen support is on modern Linux desktops, Plasma needs to be killed and restarted to switch between the two monitors. Sigh!

As for the screen corruption I noted earlier, it seems to be fixed by one of two changes: upgrading to Linux 4.12 (from the Arch Linux testing repository) or switching the compositor’s setting from OpenGL 2.0 to OpenGL 3.1. I think it may be the latter, but I have no intention of testing which just yet.

It looks like I’ll be very happy with this laptop, I just need to figure out some new workflows so that I don’t feel out of place not having Gentoo on my “workstation”.

Also, to keep with my usual Star Trek host naming, this new laptop is named everett after the (non-canon) USS Everett, which is the same class (Nova-class) as my previous laptop, which was named equinox.

Motherboard review: ASUS vs MSI

You may remember that last year I bought a gamestation to play games at home (which means running Windows on it). Last month, I had to make a relatively big change: replacing the motherboard altogether. And since I have now had the chance to compare two motherboards of about the same generation, I thought I could give a bit of a comparative review of the two.

My original motherboard was an ASUS X99-S (which right now has an absolutely crazy price!), which I coupled with an Intel 5930K (no longer sold). On paper the motherboard is great (SATA3, M.2 and so on), and it may actually be good if you get one that isn’t broken, but mine clearly was.

The first glitch I noticed, but did not pay enough attention to, was related to the USB 3 ports. While all the ports worked fine, I never managed to install the ASMedia drivers, even though the ASMedia controller was meant to be backing some of the ports, and SysRescCD saw them just fine. This bothered me for a while when I had performance issues with one of my devices, but otherwise it seemed okay.

The second problem was trickier: it’s hard to pin down whether it was always there or whether an update caused it. When I bought the gamestation, memory was expensive, so I only got 32GB. A few months later, I had some spare pocket money (well, I got some bonuses that I wanted to exchange for some gratification) and bought 32GB more. Stupidly, I don’t remember whether I checked that it worked fine; I just trusted it. A few months later, while trying to do some big processing in Lightroom, I noticed that Windows only saw half of the RAM. I thought it was a bad bank or something like that, but no combination of shuffling the RAM around would get Windows to see more than 32GB of it, even though CPU-Z would see all eight banks populated.

At that point, Nikolaj suggested it could be an ME problem, so I went ahead and re-flashed the BIOS from scratch with an SPI flash adapter, but that didn’t help. Re-seating the CPU also didn’t help. I was appalled, but it was not enough to make me replace the board just yet, so I put the extra RAM to the side and soldiered on. I was wrong.

Last November, literally the day after my birthday, I came back home from a trip and wanted to download some dozens of GB of pictures I had taken… and my computer wouldn’t boot. The boot code showed the system blocked on a CSM (Compatibility Support Module) failure. Trying all the permutations of things to change helped nothing, so it was either the motherboard or the CPU — I took a bet on the motherboard, given the previous history, and ordered an MSI X99 SLI Plus while I was in the US, where it was significantly cheaper than in Europe.

My hunch was right, and indeed the new motherboard solved the problem. The specs of the two are actually about the same: there is the same ASMedia USB controller (though this time the drivers install correctly), all the RAM is now seen by the system, and of course the computer boots. But that is just the superficial look at it. There is something else.

Both ASUS and MSI provide software utilities for overclocking, as is expected for motherboards designed for the Haswell-E family of processors. But the approaches the two take are significantly different. ASUS appears to encode most of the logic in the software itself, with their “DIP5” core, while MSI appears to keep it in the firmware (which also seems to make the boot process a bit slower).

ASUS’s utility pack is called “AISuite”, and its major version is tied to the board’s generation: version 3 for the X99 motherboards. While there has been at least one update since the time I bought the board, the last version of the suite was released on 2015-07-28. In addition to the overclocking UI, the suite includes a handful of other board-specific tools: one to set the bulk transfer sizes (to provide higher performance on USB3 non-UAS devices, not needed on Linux as the kernel does the right thing by default), one to allow faster charging of iPhone devices, and so on and so forth. Some of this is actually quite useful, for instance the faster USB transfer, although it also has the side effect of stopping the WD SmartWare tools from recognizing the drive, and so breaking your backups if you decided to use WD’s own tool rather than Microsoft’s.

On the other hand, a new release of the DIP5 core came out on 2016-06-29, to support the new CPUs — their 2011-3 socket is full-pin, which allowed them to support a further generation of CPUs with only firmware updates. This is effectively an update of the various drivers needed by the underlying overclocking system, as well as a complete overhaul of the Suite UI — likely the result of applying a newer-generation Suite to the motherboard.

Unfortunately, the new Suite UI does not come with a new set of add-ons for the charger, USB, etc. This would be okay, except the add-ons’ ABI changed: the moment you open the Suite app, you have to press Enter over and over as it tries to fetch icon files that do not exist. Copying the old PNG files into the new path makes it stop throwing these errors, but the UI clearly shows the wrong icons.

Oh, and by the way, starting AISuite with a different motherboard causes Windows 10 to blue-screen. I know because, after booting my gamestation with the new motherboard, I was welcomed by the blue screen of death and had a sinking feeling of dismay, expecting the CPU to be broken instead (turns out no, it was all AISuite’s fault).

What about MSI’s app then? Well, their approach appears to be significantly different: first of all, the overclocking app only does overclocking — they rely on ASMedia’s own tooling and drivers for the USB bulk transfer reconfiguration, and provide an optional tool for the charging options. In the spirit of not reimplementing things, they also don’t require any new Windows driver for this, asking you to install the Intel ME drivers instead… which was fun, because the copy I had installed from before the motherboard replacement was newer than the one MSI provides on their website.

And this makes the MSI utility more interesting: its last update was on 2016-12-06, and since they use the exact same package for all their boards, it includes no board-specific features and no drivers, so updating it is significantly simpler for them.

The end result is that I’m fairly happy. MSI does not ship the tons of crapware that ASUS appears to provide for their boards. They do come with a “Live Update” tool, which I wouldn’t trust, even though I have not tested it. Too many of those apps have forgotten to implement HTTPS, certificate validation or pinning, making them extremely risky to run, which is unfortunate.

As an aside: when you replace the motherboard of your computer, most systems that use computer authorization will consider it a new computer. That includes Microsoft’s own Windows 10 license handling, as the Windows 10 license is tied to an EFI variable, from what I remember.

Of all those systems, Microsoft’s was the easiest to deal with, though. The system booted as unactivated, and they do try to point you towards buying a new license, burying the right interface behind “Troubleshooting”, but once you say “I changed hardware recently”, it allows you to just replace the previous computer authorization with the current one.

Both Google Play Music and iTunes require authorizing an additional computer, which is a problem if you are close to the limit (because then you may have to unauthorize them all and re-authorize them). Stupid DRM.

Diabetes control and its tech, take 6: multiple devices, multiple interfaces

In my previous post on the topic I complained about Abbott not being forthcoming with specifications for their devices. I have since started looking into ways to sniff the protocol while keeping the device talking to the Windows software, but I haven’t really had any time to proceed with the plan.

On the other hand, when I was providing another person with a link to LifeScan’s full protocol specifications, they told me that my device is now completely obsolete, and they offered to send me a newer one. I filled in the form and I’m now waiting for the device itself. This means that I’ll probably soon be writing a second device driver for my glucometer utilities, for the OneTouch UltraMini. The protocol itself is quite different, as this time it’s a fully binary protocol, where dates need not so much be parsed as formatted (they are reported as UNIX timestamps), although it loses a few features in the process, namely the comment/meal options. Not extremely important, but a bit sad — not that I ever used those features except for testing.

While the new meters no longer allow you to provide your before/after meal indication, the OneTouch software does have heuristics to provide said information. I’m not sure exactly how they do that, but I would venture a guess that they use the time of day and the spacing between measurements. So if you have two measurements at, let’s say, 7 and 9 in the morning, it will count them as before and after breakfast, while a measurement at 11 is going to be considered as before lunch. One of the entries in my infinite TODO list is to implement time-based heuristics like this in my utilities.

Now, while my utility works fine as a CLI tool, it does not really help to keep track of diabetes over time, and it does not have even a fraction of the features of LifeScan’s or Abbott’s software. One of the things I plan on working on is a way to store the data downloaded from the meter in a local database, such as SQLite, and then show it over time with a half-decent UI. This is going to be my target for next year at least.

Also, in a previous post I noted how I’d like to have an Android app to download the data on the fly. I have not started working on that area at all; on the other hand, I was able to secure not one but two devices supporting USB OTG. Unfortunately, neither supports the PL2303 serial adapter that LifeScan uses in their cables. Since mobile devices are not my speciality, I’d love to hear from somebody who has a clue about them: does it make sense to write a userland, libusb-based implementation of the PL2303 protocol so it can be used over OTG, or would the time be better spent on devising a Bluetooth adapter?

On the Bluetooth adapter side, the LifeScan devices – at least the old ones but, as far as I can tell from the website, the new ones as well – use a minijack interface similar to the osmocom serial cable, but not compatible with it, so don’t waste your time with those cables. The OneTouch cables have Rx/Tx swapped, so that Rx is at the tip and Tx at the sleeve. On the other hand, if I’m not mistaken it should be possible to have a device (small or big is beside the point for development) that interfaces with the meter as a “dock” and provides a Bluetooth serial port that can work with a paired computer — what I have no idea about is whether a mobile device can use a Bluetooth connection as a serial port.

At any rate, these are my current musings. When I have some more details, and especially if somebody out there can give me suggestions on the direction to take, I’ll post more. And in the meantime, if you have a glucometer, a cable, and the protocol, you can write a driver for it and send me a pull request on the GitHub repository; I’ll be happy to review and merge it!

Why can’t I get easy hardware

When I bought my Latitude I complained that it seemed to me more and more like a mistake — until the kernel started shipping with the correct (and fixed) drivers, and the things that originally didn’t work right (the SD card reader, the shutdown process, the touchpad, …) started working quite nicely. As of September 2011 (one year and a quarter after I bought it), between Linux and firmware updates from Dell and Broadcom, the laptop works almost completely — the only part still missing is the fingerprint reader, which I really don’t care that much about.

You have probably seen my recent UEFI post, where I complained that I couldn’t install Sabayon on the new Zenbook (which is where I’m writing from, right now, on Gentoo). Well, that wasn’t the only problem I have had with this laptop, and I should really start reporting the issues to the kernel developers, but in the meantime let me write down some notes here.

First off, the keyboard backlight is nice and all, but I don’t need it – I learnt to touch-type when I was eight – so it would just be a waste of battery. While the keys are reported correctly, and upower supports setting the backlight, at least the stable version of KDE doesn’t seem to expose the setting. I should ask my KDE friends if they can point me in the right direction. Another interesting point is that while the backlight is turned on at boot, it stays off after suspension — which is probably a bug in the kernel, but it works in my favour.

Speaking of things not turning back on after suspension, the WLAN LED on the keyboard does not turn back on at resume. And related to that, the rfkill key doesn’t seem to work that well either. It’s not a big deal, but it’s a bit bothersome, especially since I would like to turn off the Bluetooth adapter only (and since that’s supposedly hardware-controlled, it should get me some more battery life).
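
Since the hotkey is unreliable, the same can at least be done from a shell with the rfkill tool, which lets me block just the Bluetooth radio; whether a soft block saves as much power as a hardware switch would is a different question:

rfkill list                 # show the available radios and their state
rfkill block bluetooth      # soft-block only the Bluetooth adapter (as root)
rfkill unblock bluetooth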

The monitor’s backlight is even more troublesome: the first problem is deciding who should be handling it — it’s either the ACPI video driver (by default), the ASUS WMI driver, or the Intel driver. Of the three, the only one that makes it work is the Intel driver, and I’m not even sure whether that is actually controlling the backlight or just the tint on the screen, even though, when set to zero, it turns the screen OFF rather than just displaying it as black. It does make it bearable, though.
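
The competing interfaces are easy to see from sysfs: each driver that thinks it owns the backlight registers its own entry under /sys/class/backlight (the exact names below are just what I would expect on this hardware), and each can be poked directly, as root, to find out which one actually drives the panel:

ls /sys/class/backlight/     # e.g. acpi_video0, asus_nb_wmi, intel_backlight
cat /sys/class/backlight/intel_backlight/max_brightness
echo 200 > /sys/class/backlight/intel_backlight/brightness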

The brightness keys on the keyboard don’t work, by the way, nor does the one that should toggle the light sensor — the latter isn’t even recognized as a key by the asus-wmi driver, and I can’t be sure of the correct device ID I should use to turn said light sensor on and off. After hacking the driver not to expose either the ACPI or the WMI brightness interface, I’m able to set the brightness from KDE at least — but it does not seem to stick: if I turn it down, after some time it goes back up to the maximum (when power is connected, at least).

And finally, there is the matter of the SD card reader. Yesterday I went to use it, and found out that… it didn’t work. Even though it’s a USB device, it’s not mass storage — it’s a Realtek USB MMC device, which does not use the standard USB interface for MMC readers at all! After some googling around, I found that Realtek actually released a driver for it, and after some more digging I found out that said driver is currently (3.7) in the staging tree as a virtual SCSI driver (with its own MMC stack) — together with a PCI-E peer, which has already been rewritten for the next release (3.8) as three split drivers (an MFD base, an MMC driver, and a MemoryStick driver). I tried looking into porting the USB one as well, but it seems to be a lot of work, and Realtek (or rather, Realsil) seems to already be working on porting it properly, so it might be worth waiting.

To be fair, what made me drop the idea of working on the SD card driver is that, to have an idea of what’s going on, I have to run 3.8 — and RC1 panics as soon as I reconnect the power cable. So even though I would like to find enough time to work on some kernel code, this is unlikely to happen now. I guess I’ll spend the next three days working on Gentoo bugs, then I have a customer to take care of, so this is just going to drop off my list quite quickly.

Trouble with memtest

So there is something that doesn’t feel quite right on the tinderbox — as in, there are a few logs showing ICEs (internal compiler errors) and a few killed Ruby processes that make no sense at all. This is usually an indication of bad memory. Now, it is true that I ran a one-pass test of the memory when I got the system, and it didn’t spot anything, and that this does not happen consistently, so I wanted to give it a 24-hour memory test — which should be easy thanks to a serial console and the memtest86+ software, no?

Well, no. Let’s start with a bit of background on the setup. Excelsior, the server that is running the two tinderbox instances, is a Supermicro barebone, which integrates an IPMI 2.0 BMC. This allows me to control it just fine from here (Italy) while the server is back in Los Angeles. At least in theory: while the server is on a public IP, the IPMI interface is connected to the VPN of my employer back in the US, so to actually connect to it I have to jump through an SSH host — which is easily done on Linux, but not on Windows.

The serial console, by the way, is tremendously easy to get to, as you can simply SSH into the IPMI IP and use some semi-standard commands to reach it — in particular you just need to run cd /system1/sol1 and then start. Unfortunately, my first blind setup of grub (2) and the Linux kernel was wrong, as I set them to output to ttyS0, while the IPMI forwards ttyS1. And finding out how to set up grub2 to use a serial console wasn’t easy.

What you have to do is edit /etc/default/grub and add this:

GRUB_TERMINAL="serial"
GRUB_SERIAL_COMMAND="serial --unit=1 --speed=115200"

This will set it to use the second serial interface (--unit=1) at 115200,8n1 (8n1 is the default). And there you go. Actually, command-line editing seems to work more reliably on the serial console than on the display.
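
Keep in mind that editing /etc/default/grub is not enough by itself: the configuration needs to be regenerated afterwards (the command may be prefixed grub2- depending on how your distribution packages GRUB 2):

grub-mkconfig -o /boot/grub/grub.cfg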

So this is done; what’s next? Well, next is getting memtest to work — and it doesn’t help that the pre-compiled binary provided by the upstream project is not able to start. The problem is a “too small lower memory” error, which is caused by a combination of grub, the BIOS and the compiled file itself. While for some systems it’s enough to use a custom-compiled version such as the one provided by sys-apps/memtest86+, on Excelsior that didn’t work. So I had to go with the fallback: the ebuild installs both an old-style Linux kernel bootable binary and a NetBSD-style binary; as far as I can tell, the latter does not support boot parameters though.

The correct way to boot the latter on GRUB 2 is to edit /etc/grub.d/40_custom and add:

menuentry "Memtest86+" {
    insmod bsd
    knetbsd /boot/memtest86plus/memtest.netbsd
}

For those wondering, yes I’m working as we speak on ebuilds that install the grub2 extra configuration file by themselves, and you should have them by the end of the day. This involves both memtest86+ and, as you’ll see in a moment, memtest86 itself. This will make it much easier to deal with the packages.

Ah of course this has to be built on my laptop, because both memtest86+ and memtest86 require a multilib compiler as they are built 32-bit. Sigh.

Unfortunately, not even that is good enough for my system. While with this code it boots, instead of refusing to do so, it seems to get stuck during initialisation and the test never starts. But how do I know that, given that memtest by default does not output to serial, and when it does it outputs on serial 0?

Well, the IPMI interface actually has what they call an iKVM written in Java, not to be confused with the other Java IKVM — the problem with it is that it doesn’t work with IcedTea, and thus you have to use Oracle’s JRE to run it; the bug involves not only Supermicro systems, but also Dell and others. Why it hasn’t been solved on the IcedTea side is something I have no idea about.

While the package uses a standard RFB/VNC protocol, it implements authentication in a non-standard way from what I can tell, so I can’t simply log in as I’d like to. It also probably has either some extensions or an extra signalling protocol, as it can be used to set up “virtual media” such as virtual CD-ROMs and virtual floppy images.

Now, this latter detail should give me enough to deal with the memtest issue, as I’d just have to connect a virtual ISO of memtest to get it to work but … Java segfaults (it uses a native library) the moment I try to do so! I have yet to check whether this is simply because it’s trying to use a signalling port that is unavailable, but it doesn’t feel very likely.

Okay so memtest86+ boots as a NetBSD-style kernel, but it doesn’t seem like it’s able to do anything — what about the original project? Memtest86 is still alive and kicking, and released a new version last October (called 4.0b but versioned 4.0s) which supports “up to 32 cores and 8 TB of memory” — reading such release notes I’m afraid that the reason why Memtest86+ doesn’t work is simply that it’s too much memory, or too many cores (32 cores, 64GB of memory).

Unfortunately Memtest86 doesn’t seem to build a NetBSD kernel style binary, so I can’t boot that either. Which means I’m stuck.

Interestingly, Memtest86+ released a 5.00beta back in May — unfortunately there are two big problems: there is no download for the NetBSD kernel, and most worrisome, they didn’t release the sources. Given the project is GPL-2 and includes code from the original Memtest86 project (which the maintainer of Memtest86+ has no way to claim rights to), this is a license violation.

So now I’m using the user-space memtester, which seems to work fine even on hardened kernels, with the caveat that it doesn’t allow testing the full range of memory, only a little piece of it at a time. Sigh. No easy way out, eh?
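
In case it’s useful to someone in the same boat: memtester takes the amount of memory to lock and the number of passes, so the best I can do is sweep it in chunks and let it run for a long while; the chunk size here is arbitrary and bounded by how much the kernel will let a single process mlock:

memtester 4G 8    # lock and test 4 GiB of RAM, eight passes; run as root so it can mlock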

Readying a new tinderbox? I could use your help!

I don’t think it’s much of a secret anymore that I’m planning on moving to the United States before the end of the year, if I can. This means not only that I have to wind down and relocate the business happening in Italy, but also that I have to find a way to rebuild my own infrastructure here in the US.

This has become more pressing since yesterday, when my home router in Italy decided to have hiccups it hadn’t had in a long time, and I’m now unable to reach any of the computers back at home, including the monitoring services, the containers I use to build the packages for the remote servers, and so on and so forth. Given I was already planning on retiring Yamato after almost four years of almost uninterrupted service, simply because the mainboards started acting up, and adding to that the fact that shipping it between the EU and the US is going to be a pain, and more expensive than the system is worth, I decided it’s time to get a completely new box, here in the US.

At first I was looking at dedicated server offerings, but the monthly prices are definitely too steep for my taste — yes, of course I could get one in the EU, but that’s not the same thing. Besides, I’d much prefer having something that, in the worst case, still allows me direct access to the box as needed. After discussing it with KingTaco, I decided to check out what was available from NewEgg, and the result is that I found what would be the perfect box to host the next round of testing: a dual 16-core, 32GB RAM, 4TB HDD system, for just under four thousand dollars.

Of course that’s still quite a bit of money especially considering I’m trying to account for all factors of moving here, closing the business in Italy, and still giving enough to my mother to keep on going for a while…

And here’s where the pledgie comes into view. While I’m going to use the new box for a few personal projects as well, it’s obvious that its main use will again be tinderboxing, and since this time I don’t intend to host it at home, I want to be able to give other developers access to the different tinderboxes as well. Add to that the enormous amount of time required to go through the logs and report bugs (most of which I’ll be unable to spend during my work hours), and you can probably see why I’m asking once more for some help to get back to tinderboxing.

While I set up the pledgie for the actual, full price of the box, this does not mean I’m not going to pay my share: both PayPal and Pledgie take out their own fees, which are actually quite steep by themselves (PayPal’s more than Pledgie’s!), and there is the matter of finding a co-location, which I’ll also take care of paying for. For more details of what I had in mind, see the following quoted text from the Pledgie page:

Those of you who have been following me for long enough know this already, but for those who haven’t, my Tinderbox is an effort to keep compiling and installing the packages available in Gentoo to make it possible to know whether they are working correctly or not.

While this is not considered an “official” effort, the fact that I’ve not been running it for almost a year is starting to show, with packages being bumped without their reverse dependencies being checked, and new automagic dependencies cropping up that weren’t noticed before.

Also considering that I’m now in the process of moving from Europe to the United States, and the fact that the motherboard of the system I’ve been using to run tinderboxing has been showing problems (which is the reason why I stopped working on it), I think it’s time to get a new system (from scratch at this point), and restart the effort.

This time the system will be cleanly split, so that the tinderboxing is handled in a separate environment and other developers will also have access to it, but I’ll also keep running some of my personal projects on it, and some work-related software. On my side, I’m going to take care of assembling and maintaining the system, and will pay for its connectivity (likely in a co-lo near where I’m going to stay in the US).

But since it’s still a non-trivial amount of money that is required for a system powerful enough not to need replacing in the next few years, I’m going to ask for the help of you all out there. The target is set to the exact price of the system I’m aiming for on Newegg, which unfortunately means that it includes sales taxes. Please note that while the following specs could look like overkill, this time I aim to run multiple, isolated tinderboxes; in particular I have three configurations I care about: amd64, amd64-hardened and x86.

* Dual Opteron 6272 16-core 2.1GHz CPUs
* 32GB DDR3 ECC RAM
* 4x 1TB Western Digital Caviar Black HDDs
* 32GB OCZ Onyx SSD (for the operating system)

Please keep in mind that the hardware is only a fraction of the value behind the tinderbox; the most important part is reporting the bugs, which is still not automated. So while I’m asking you to pledge money, I’m pledging a lot of my free time for this project again.

Please help, if you can. Thank you immensely.
Diego Elio Pettenò aka Flameeyes

For those who wonder how it’s possible for a donation to appear before the Pledgie started: I had some extra funds from previous donations, which I pledged myself toward this. Furthermore, I’m going to pledge all the Flattr funds that land in my PayPal account in the coming months. For those who don’t want to use PayPal, I’ll be tallying up donations coming in the form of gift certificates for Amazon or iTunes (both US) manually; just make sure to tell me whether you want your name shown or not.

Working on kernel drivers

I have already said that I’m currently in the States for work; I’m here with Luca and we’re working on a project I won’t go into the details of (not yet, at least). One of my tasks, though, I can talk about, simply because I have already posted one patch and thus it’s not much of a secret.

I’m working on kernel-level support for the ITE IT8728F Super I/O chip. For those not used to these names, a Super I/O chip is a chip, usually sitting behind an LPC bridge, that contains the PS/2 (mouse, keyboard) controller, as well as the floppy disk controller, the interface to the hardware monitors, and so on and so forth.

The IT87 family is a vast one, and is usually well supported in the hardware monitoring department, where even the IT8728 is currently supported out of the kernel. What it lacked, at first sight, was support for the WDT (Watchdog Timer), which I needed. That was easy, as it was just a matter of adding the IDs in the right places, which I did, sending the patch to the LKML.

What I’m working on now, instead, is the GPIO (General Purpose I/O) interface of the chip, which we need for this project; it’s proving a bit more complex than I expected… this one is a driver I’m writing from scratch, as the one already there for another chip (IT8761) is not compatible at all with what we have here.

Another consideration is that, looking at the other two drivers implemented for this family (the hardware monitor and the watchdog), I’m thinking that having a multi-function device (MFD) driver for this would probably be a better idea, since there are so many knobs and tweaks we could add, simply by registering a platform device and adding a few attributes to sysfs.

I’m not sure how much of this work I can push through as part of this project, but I’m pretty sure that if I can put some more free-time hours into the task I can get this to integrate with the LED class drivers, so I can achieve what Luca thought was possible already (controlling the power/hard-drive LEDs to perform a different task, as is already done on a multitude of laptops).

For the moment, I’m afraid I just have to finish the GPIO driver quickly and send it out for review, but if you’re interested in testing or helping out with this chip, please do let me know; it could be interesting.