Diabetes control and its tech; should I build a Chrome app?

Now that I can download the data from three different glucometers (of two different models) with my glucometer utilities, I’ve been pondering ways to implement a print-out or display mode. While the text output is good if you just want a quick look at your readings, with more than one meter it gets difficult to follow them across multiple outputs, and it’s not something your endocrinologist can usefully read.

I have a working set of changes that adds support for a sqlite3 backend to the tool, so you can download from multiple meters and then display the readings as a single series — which works fine if you are one person with multiple meters, rather than multiple users each with their own glucometer. On the other hand, this made me wonder whether I’m spending my time working in the right direction.
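To give an idea, here is a minimal sketch of what such a backend could look like; the table layout and names are made up for illustration, not the actual schema of my changes:

    import sqlite3

    def open_db(path):
        # Hypothetical schema: tag every reading with the meter's serial
        # number, so that multiple meters can be merged into one series.
        db = sqlite3.connect(path)
        db.execute("""CREATE TABLE IF NOT EXISTS readings (
                        meter_serial TEXT NOT NULL,
                        timestamp    TEXT NOT NULL,
                        value_mgdl   REAL NOT NULL,
                        PRIMARY KEY (meter_serial, timestamp))""")
        return db

    def add_reading(db, serial, timestamp, value):
        # INSERT OR IGNORE makes re-downloading from the same meter
        # idempotent; timestamp is assumed to be a datetime object.
        db.execute("INSERT OR IGNORE INTO readings VALUES (?, ?, ?)",
                   (serial, timestamp.isoformat(), value))
        db.commit()

    def all_readings(db):
        # One ordered series, regardless of which meter produced each value.
        return db.execute("SELECT timestamp, value_mgdl FROM readings "
                          "ORDER BY timestamp").fetchall()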

Even just adding support for storing the data locally had me looking into two more dependencies: SQLite3 support (which, yes, comes with Python, but needs to be enabled at build time), and pyxdg (with a fallback) to make sure the data ends up in the right folder, to simplify backups. Having a print-out or a UI that can display the data over time would add even more dependencies, meaning the tool would only really be useful if packaged by distributions, or if binaries were distributed. While this would still give you a better tool on non-Windows OSes, compared to having no tool at all if left to LifeScan (the only manufacturer currently supported), it is still limiting.
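For reference, the pyxdg dependency with its fallback boils down to something like the following sketch; the resource name here is illustrative:

    import os

    def data_directory():
        # Prefer pyxdg, which honours $XDG_DATA_HOME; if it's not
        # installed, fall back to the default path prescribed by the
        # XDG base directory specification.
        try:
            from xdg.BaseDirectory import save_data_path
            return save_data_path('glucometerutils')
        except ImportError:
            path = os.path.join(
                os.environ.get('XDG_DATA_HOME',
                               os.path.expanduser('~/.local/share')),
                'glucometerutils')
            if not os.path.isdir(path):
                os.makedirs(path)
            return path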

In a previous blog post I was musing on the possibility of creating an Android app that implements these protocols. Since running Python on Android is difficult, this would mean reimplementing them from scratch in a different language, such as Java or C# — indeed, that’s why I jumped on the opportunity to review PacktPub’s Xamarin Mobile Application Development for Android, which will be posted here soon; before you fret: no, I don’t think that using Xamarin for this would work, but it was still an instructive read.

But after discussing it with some colleagues, I had an idea that is probably going to give me more headaches than writing an Android app, while at the same time being much more useful. Chrome has a serial port API – in JavaScript, of course – which can be used by app developers. I don’t really look forward to implementing the UltraEasy/UltraMini protocol in JavaScript (that protocol is based on binary structs, while the Ultra2 should actually be easy, as it’s almost entirely 7-bit safe), but admittedly this solves a number of problems: storage, UI, print-out, portability, ease of use.
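For comparison, in Python the binary framing is just a struct format string; the record layout below is invented purely for illustration:

    import struct

    # Invented record layout, for illustration only: little-endian
    # 4-byte timestamp followed by a 2-byte glucose value in mg/dL.
    _RECORD = struct.Struct('<IH')

    def parse_record(packet):
        timestamp, value = _RECORD.unpack(packet)
        return timestamp, value

In JavaScript, as far as I can tell, the same unpacking has to be spelled out by hand over ArrayBuffer and DataView, which is what makes the port unappealing.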

Downloading the data to a device such as the HP Chromebook 11 would also be a terrific way to make sure you can show it to your doctor — probably more so than on a tablet, and definitely more than on a phone. And I know for a fact that ChromeOS supports PL2303 adapters (the chipset used by the LifeScan cable I’ve been given). The only problem with the idea is that I’m not sure how HTML5 offline storage is synced with the Google account, if at all — if I were to publish the Chrome app, I wouldn’t want to have to deal with HIPAA.

Anyway, for now I’m just throwing the idea around; if somebody wants to start before me, I’ll be happy to help!

Why I don’t trust most fully automatic CI systems

Some people complain that I should let the tinderbox work by itself, either opening bugs or mailing the maintainers when a failure is encountered. They say this would make bugs get reported faster, and so on. I resist the idea.

While it does take time, most of the error logs I see in the tinderbox are not reported as bugs. They can be the same bug happening over and over again – as in the case of openldap lately, which fails to build with gnutls 3 – or they can be known issues or false positives, or they simply might not be due to the current package but to something else that broke, as when a KDE library fails because one of its dependencies changed ABI.

I had a bad encounter with this kind of CI system when, for a short while, I worked on the ChromiumOS project. When you commit anything, a long list of buildbots picks up the commits and validates them against their pre-defined configurations. Some of these bots are public, in the sense that you can check their configuration and see the build logs, but a number of them are private and only available on Google’s network (which means you either have to be at a Google office or connect through a VPN).

I wasn’t at a Google office, and I had no VPN. From time to time one of my commits would cause the buildbots to fail, and then I had to look up the failure; I’d say about half the time the problem wasn’t a problem at all, but rather one of the bots going crazy, or breaking on another commit that wasn’t flagged. The big bother was that many times the problem appeared in the private buildbots, which meant I had no way to check the log for myself. Worse still, a failure would close the tree, making it impossible to commit anything other than a fix for that breakage… which, even if there was one, I couldn’t write simply because I couldn’t see the log.

When this happened, my routine was relatively simple, but a time waster: I’d pop into the IRC channel (I was usually around already) and ask if somebody from the office was there; this was not always easy, because I don’t remember anybody in the CEST timezone having access to the private build logs at the time, at least not on IRC; most were from California, New York or Australia. Then, if the person who was around didn’t know me at least by name, they’d explain to me how to reach the link to the build log… to which I had to reply that I had the link, but the hostname was not public; then I’d have to explain that no, I didn’t have access to the VPN…

I think that in the nine months I spent working on the project, most of my time ended up being wasted trying to track people down: asking them to fetch logs for me, to review my changes, or simply to answer “why did you do this at the time? Is it still needed?”. Add to this the time spent waiting for the tree to turn “green” again so I could push my changes (which often meant waiting for somebody in California to wake up, making half my European day useless), and the fact that I had no way to test most of the hardware-dependent code on real hardware because they wouldn’t ship me anything in Europe, and you can probably see both why I didn’t want to blog about it while I was at it, and why I didn’t continue longer than I did.

Now, how does this relate to my ranting about CI today? Well, yesterday, while I was working on porting as many Ruby packages as possible to the new testing recipes for RSpec and Cucumber, I found a failure in Bundler — at first I thought about just disabling the tests when not using userpriv, but then I reconsidered, wrote a patch so that the tests don’t fail, and submitted it upstream — it’s the right thing to do, no?

Well, it turns out that Bundler uses Travis CI for testing all pull requests, and – lo and behold! – it reported my pull request as failing! “What? I checked it twice, with four Ruby implementations; it took me an hour to do that!” So I looked into the failure log, and the reported error was an exception telling Travis that VirtualBox was being turned off. Of course the CI system doesn’t come back to you to say “Sorry, I had a {male chicken}-up”. So I had to comment myself to show that the pull request is actually not at fault, hoping that upstream will now accept it.

Hopefully, after relating my experiences, you can tell why the tinderbox still uses a manual filing approach, and why I prefer to spend my time reviewing logs rather than attaching them.

The extraordinary world of libtool

A little sidenote first: since I’ve had more system freezes with Yamato, I’m currently using the Dell laptop as my main box. I’m considering creating a proper frontend for Yamato and leaving that one headless, but that’s longer term; for now I’ll be fine this way. Huge thanks to Jürgen and Cyprien for making these stressful days much more bearable.

In my current line of work, I’ve hit the worst roadblock with, well, PAM. Not at the interface level this time, but rather at the build-system level. The problem? Building a copy of Linux-PAM 1.1.3 (sys-libs/pam in Gentoo) for a different ROOT will fail if an older copy of Linux-PAM is installed in that ROOT and LDFLAGS contains the directive -L${ROOT}/lib (let’s assume a 32-bit userland, never 64-bit).

Let’s analyse the current series of problems, then: why would you have to mangle LDFLAGS at all? Well, it mostly relates to finding the correct library when linking. When building within a ROOT directory for the same architecture, most libraries are linked from the base system, which is why emerge will always install packages for / as well as for the designated ROOT. When cross-building, libraries are searched for in the SYSROOT directory (usually /usr/$CHOST), but emerge is not currently smart enough to deal with that case, so packages will not be merged there before the one for ROOT is merged. For a previous customer of mine I simply had my wrapper scripts build everything twice, once in SYSROOT and once in the actual filesystem tree.

To avoid building the same package two or three times, the common method is to add the paths within ROOT that should be searched to the include and library search paths. This works most of the time, but there are situations where it simply doesn’t work as intended. For instance, doing so with iproute2 will cause -L$ROOT/lib to take precedence over the -L../lib that the build system uses to link to its internal “libutil” — in turn, this will cause the linker to resolve -lutil to the system copy of libutil (provided by glibc) rather than to the internal convenience static library. Another good reason to avoid too-common names for convenience libraries. Fixing that was trivial (even though I don’t see it merged yet).
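The mechanics are simply that the linker scans the -L directories in command-line order and takes the first library matching the requested name; here is a toy model of that lookup, in Python purely for the sake of illustration:

    import os

    def resolve_library(name, search_dirs):
        # Toy model of how the linker resolves -l<name>: walk the -L
        # directories in command-line order and pick the first match,
        # preferring the shared object over the static archive.
        for directory in search_dirs:
            for candidate in ('lib%s.so' % name, 'lib%s.a' % name):
                path = os.path.join(directory, candidate)
                if os.path.exists(path):
                    return path
        return None

    # With -L$ROOT/lib injected before -L../lib, glibc's libutil is found
    # first, shadowing iproute2's internal convenience library:
    #   resolve_library('util', ['/build/root/lib', '../lib'])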

On the other hand, if you were to drop all the LDFLAGS mangling, you’d hit issues with the .la files, as libtool will then translate name references into full paths, referencing / directly and failing when cross-compiling arm-on-x86. Which is yet another reason why, well, we should be killing those stupid files.

But how does all this relate to Linux-PAM? Well, first of all, just like iproute2 above, the build system for Linux-PAM 1.1.3 uses the combination of library search path and name reference to link to the just-built libpam and libpam_misc, and once again this is trivial to fix (even though it requires a 30KB patch and breaks my clean record, it’s mostly an automated process; I didn’t want to use sed in the ebuild, though, because I’d risk keeping it around for new versions where hopefully it won’t be needed). Unfortunately, this only fixes half the problem: Linux-PAM builds fine, but it then refuses to install.

The problem here is a pretty common one, which I had already experienced when working on xine, and it relates to the relinking feature; you have probably seen libtool relinking libraries during the install phase of a package, often taking quite a lot of time (that’s the case for evolution, for instance). The reason libtool takes care of this is that you’re building libraries (or, in this case, modules/plugins/whatever you want to call them) that link against a just-built library; since libtool can use rpath to make it easier to debug executables from the build directory, it has to make sure that the installed copy does not use those broken paths.

I’m not sure if this is a bug or a feature, though: libtool forces relinking even when the package does not use rpath, or when rpaths are not used at all during the build (because fast-install is enabled). Also, for just about the same path problem, libtool does not use path-based linking during the relink; rather, it uses name references, which are susceptible to the LDFLAGS mangling described above.

For now, the solution I found is a hack: I simply tell libtool that I don’t want it to relink the PAM modules during install, which both solves the build problem and should reduce the time needed for Linux-PAM to build. Unfortunately, this does not solve it for all the other packages out there with the same problem. I guess it’s time to look for a solution on the libtool side.

Gold rush — The troubles of using gold for testing

In my previous post I listed a few good things that come with the stricter behaviour implemented by the gold linker, and why I’d like to use it for testing. You also know that the currently released versions are definitely unreliable and unusable for testing (or any other) purposes.

Well, version 2.20 (the latest “stable” release of binutils) is quite ancient by now, and 2.21 is coming out soon, so I actually went on to test the status of gold in that version: it works, sort of. Indeed, with the current working version of gold – which, among other things, can, and I’d venture to say should, be built separately – you can build a good deal of packages. But by no means all.

It’s not just a matter of what fails because of the underlinking prevention, whose positive effects, and the reasons why we should enforce it more strongly, I hope I have conveyed; it’s that we know of at least one bug where gold fails to behave properly even when the documented syntax is used correctly.

The above bug is a split-off of the issues that FUSE itself has when built with gold: on one side there was a bug in FUSE (the same symbol listed under two version names within the linker script), and that was fixed upstream; on the other hand, there’s the following syntax, described by the binutils manual itself:

__asm__(".symver original_foo,foo@");

This is supposed to declare the symbol foo in the base version; it cannot be replaced with a linker script either, since there is no way to define the base version there if any other version is defined. On the other hand, gold has gone up to now without implementing this feature. Willingly. And there’s actually a not-entirely-bad reason why it has done so: the reference implementation for this stuff (GLIBC) is broken in this regard.

Indeed, it seems that instead of using the base version first when an unversioned symbol is requested, GLIBC prefers another symbol, not even the default one — most likely, it depends on the order of the symbols in the ELF file. The result is, at any rate, pretty bad: as far as FUSE is concerned, binary compatibility between version 2.0 and all the following ones is not really there; binary compatibility is ensured only since they started providing versioned symbols, so from version 2.2 onward.

Now, how to move forward from here depends largely on whether version 2.21 will have the bug fixed; if not, then I think the only way to solve this would be to install a separate gold package, with a binutils-config configuration file to select it as the default linker — a nice trick whose idea I lifted directly from the chromiumos-overlay binutils ebuild.

Indeed, using a separate ebuild would also help if we were to find some other problem with gold, such as failing to build or producing bad results for a package; I’m pretty sure that if my tinderbox ran with gold, it would be the big deployment gold needs to be called “usable”. Yes, I know there are other projects using it, but given there are still relatively big issues like the one I described above, I suspect it hasn’t really been polished enough yet. This would probably take care of it.

Google and software mediocrity

I haven’t commented very much, if at all, on most of the new Google projects, which include Chrome, Chromium and Chrome OS; today, since I’m waiting on a few long-running tasks to complete, I’d like to spend my two eurocents on them.

You can already guess from the title of this post that I’m really sceptical about Google entering the operating system market; the reason is that I haven’t really seen anything in Google’s strategy that would lead us to expect a very good product from them in this area. While Google is certainly good at providing search services, and GMail is also my email provider of choice, I see quite a few shortcomings in their software, and they don’t make me count on Chrome OS being any better than Windows XP.

First, let’s just say that Google Chrome is not the first piece of software Google released for the desktop; there have been quite a few other projects before, like, for instance, Google Talk. Since I have a personal beef with this one, I’d like to go on a bit about it. When Google launched their own instant messaging service for the masses (through GMail and a desktop client), called Google Talk and based on the XMPP protocol, there was quite some talk around, because, while using the same protocol we know as Jabber, it didn’t connect to the server-to-server Jabber network that allows users of different Jabber servers to communicate; with time this S2S support was added, and now a GTalk user can talk with any Jabber user, so as a service it’s really not bad at all, and you can use any Jabber client to connect to GTalk.

The Windows client, though, seems to be pretty much abandoned: I haven’t seen updates in a while (although I might not have noticed them in the past month or two), and it lacks quite a few features, like merging multiple usernames into a single contact, and stuff like that. Now, at about the same time as they released the Windows client, Google released specifications for their extensions that allow audio (and video?) chat over an XMPP-negotiated connection, and a library (libjingle) for other clients to implement this protocol.

The library, unfortunately, ended up having lots of shortcomings, and most projects decided to import and modify it; then it was forked (at least once, but I think even twice), cut down and up, and mangled so much that it probably doesn’t look anything like the original one from Google. And yet, the number of clients that support the GTalk audio/video extensions is… I have no idea. Empathy supports them, if I recall correctly, but the last time I tried, it didn’t really work that well. As far as I know, libpurple, which is used by both Pidgin and Adium and would cover clients for all the major operating systems (free or not), does not seem to support them.

Now, the reason why I consider GTalk mediocre is not limited to the software that Google provides; it’s a matter of how they played their cards. It seems to me that instead of trying to push themselves as the service provider, they wanted to push themselves as a software provider as well, and the result is that, besides Empathy (which is far from a usable client, in my opinion), no software seems to implement their service properly. They could have implemented their extensions in libpurple (or paid somebody to implement them, or something like that), and that would have given them an edge; they could have worked with Apple (considering they already work with them closely) so that iChat could support GTalk’s audio and video extensions (instead, iChat AV from Leopard uses a different protocol that only works between Macs), and so on.

What about Google Chrome? Well, when it was announced and released I was stuck in the hospital, so I missed most of the hype of the first days; when I finally went to test it, almost a month later, I was surprised at how pointless it seemed to me. Why? Because, from what I can see, it does not render text as well as Firefox or Safari on Windows; it’s probably faster than them, but then again most people don’t care (at least in Italy, Internet connections are so slow you don’t notice), and there is one important problem: the Google bias of the browser.

I think lots of people criticised the way Microsoft originally tied Internet Explorer to their own Internet services, to the point that now Microsoft allows you to set Google as the search provider in the default install. Well, I don’t see Chrome as anything much different: it’s a browser that is tailored to suit Google’s services, and of course its development will suit that too. Will it ever get an ad-blocking feature, like those available for Firefox, Konqueror and Safari? Probably not, because Google takes a good share of its revenue from Internet-based advertising. Will it ever get a Delicious extension? Probably not, because that’s a Yahoo! service nowadays, and Google has its own alternative.

Now, I don’t want to downplay the important technical innovations of Google Chrome, even when they are very basic, like the idea of splitting tabs by process; indeed, I think I have read that Mozilla is now working on implementing a similar feature for the next major Firefox release. This is what we actually get out of the project, not Chromium itself.

Then there is Android; I don’t think I can really comment on this, but at least from what I can see, there is not much going on with Android: nobody has asked me yet whether I develop for Android, while I got a few requests for Symbian and iPhone development in the past year or so. Android phones do not seem to shine with non-technical people, and technical people, at least in Italy, are unlikely to pay the price you have to pay to get the Android-based HTC phones with Vodafone and TIM.

By contrast with Nokia, Google fragmented the software area even more. While Google already provided mobile-optimised services on the web, and some Java-based software to access their services from J2ME-compatible phones, they also started providing applications for Nokia’s Symbian-based phones. Unfortunately, this software does not shine, with the exception of Google Maps, which works pretty well and integrates with Nokia pretty decently; in particular, the “main” Google application for Nokia crashed my E75 twice! I ended up removing it and living without it (the YouTube application sort of works; the GMail application also “sort of” works, but with the new IMAP support it’s really pointless to me). So we have mediocre software from Google for Nokia phones, and probably no good reason for Google to improve on it.

But there are also things that Google hasn’t implemented at all: for instance, there is no GTalk client for Nokia phones, nor a web-based version for mobile phones, which would have been a killer feature! Instead, Nokia implemented its own Nokia Chat, which has now become Contacts for Ovi; it also uses XMPP, and also has S2S, but it does not allow you to use GTalk accounts, requiring you to have two different users: one for computers and one for the mobile phone. And similarly, with Google Sync for Nokia phones only partially working (in particular, with no support for syncing with Google Calendar, and with a tremendous loss of detail when syncing contacts), Google loses to Nokia’s Ovi sync support as well.

Now, I’m not a market analyst, and I really like to stay away from marketing, but I don’t see Google as a major player in software development. I’d really have preferred that they start improving the integration of their services with Free Software like Evolution (whose Google Calendar integration sucks way too much, and whose IMAP usage of GMail causes two copies of each sent message to be stored on the server, as well as creating a number of folders/labels that shouldn’t be there at all!), rather than building a new “operating system”.

There are more details I’m sceptical about, like hardware support (which I’ll leave to Matthew Garrett to explain, since he knows the matter better) and software support, but for those I’ll wait to see what they actually deliver.