On Android Launchers

Usual disclaimer: what I’m writing here are my own opinions, not those of my employer, and so on.

I have a relationship that is probably best described as love/hate/hate with Android launchers, going back to the first Android phone I used — the Motorola Milestone, the European version of the Droid. I have been migrating to a new launcher app every year or two, sometimes because I got a new launcher with the firmware (I installed an unofficial CyanogenMod port on the Milestone at some point), sometimes with a new phone (the HTC Desire HD, which also got flashed with CyanogenMod), and sometimes simply because I got annoyed with one and tried a different one.

I remember that for a while I was actually very happy with HTC’s “skin”, which included the launcher and came with beautiful alpha-blended widgets (a novelty at the time), but I replaced it with, I think, ADW Launcher (the version from the Android Market – what is now the Play Store – not the one bundled with CyanogenMod at that point). I think this was back when system apps could not be upgraded through the Store/Market. To make the transition smoother I even ended up looking for widget apps, including a couple of “pro” versions, but at the end of the day I grew tired of those as well.

At some point, I think on a colleague’s suggestion, I jumped onto the Aviate launcher, which was unfortunately later bought by Yahoo!. As you can imagine, Yahoo!’s touch was not going to improve the launcher at all, to the point that one day I got annoyed enough to start looking for something else.

Of all the launchers, Aviate is probably the one that looked the most advanced, and I think it’s still one of the most interesting ideas: it had “contextual” pages, with configurable shortcuts and widgets, that could be triggered by time-of-day, or by location. This included the ability, for instance, to identify when you were in a restaurant and show FourSquare and TripAdvisor as the shortcuts.

I would love to have that feature again. Probably even more so now, as the apps I use are even more modal: some of them I only use at home (such as, well, Google Home, the Kodi remote, or Netflix), some of them nearly only on the go (Caffe Nero, Costa, Google Pay, …). Or maybe what I want is Google Now, which does not exist anymore, but let’s ignore that for now.

The other feature I really liked about Aviate is what I’ll call jump-to-letter: the Aviate “app drawer” kept apps organised alphabetically, with separators between letters, which meant you could just tap on the right edge of the screen and jump straight to the right letter. Having the ability to just go to N to open Netflix is pretty handy, particularly when icons all look mostly the same except maybe for colour.

So when I migrated away from Aviate, I looked for another launcher with a similar jump-to-letter feature, and I ended up finding Action Launcher 3. This is probably the launcher I used the longest; I bought the yearly supporter IAP multiple times because I thought it deserved it.

I liked the idea of backporting the features of what was originally the Google Now Launcher – nowadays known as the Pixel Launcher – so that the new features Google announced for their own phones could be used on other phones already on the market. At some point, though, it started pushing the idea of sideloading an APK so that the launcher could also backport the actual Google Now page; that made me very wary, and I never installed it, as it would have needed too many permissions. It became too pushy when it started updating every week, replacing my default home page with its own widgets. That was too much.

At that point I looked around and found Microsoft Launcher, which was (and is) actually pretty good. While it includes integration for Microsoft services such as Cortana, all of that integration is optional, so I set it up with those features disabled and kept the stylish launcher itself, with jump-to-letter and Bing’s lovely daily wallpapers, which are particularly nice when they are topical.

It was fairly lightweight while still having useful features, including the ability to hide apps from the drawer: those that can’t be uninstalled from the phone, those that have an app icon for no reason (such as SwiftKey and Gboard), and the many “Pro” license-key apps that only launch their primary app.

Unfortunately, last month something started going wrong, either because of a beta release or something else, and the launcher started annoying me. Sometimes I would tap the Home button and the launcher would show up with no icons and no dock; the only thing I could do was go into the Apps settings and force-stop it. It also started failing to draw the AIX Weather Widget, which is the only widget I usually have on my personal phone (the work phone has the Calendar on it). I gave up, despite one of the Microsoft folks contacting me on Twitter asking for further details so that they could track down the issues.

I decided to reconsider the launchers I had used before, but I skipped over both Action Launcher (too soon to reconsider, I guess) and Aviate (given the current news involving Flickr and Tumblr, I’m not sure I trust them — and I didn’t even check whether it is still maintained). Instead I went for Nova Launcher, which I had used before. It is fairly straightforward, although it lacks the jump-to-letter feature. It worked well enough when I installed it, and it’s very responsive, so that’s what I’m using for now. I might reconsider more of them later.

One thing I noticed is that all three of Action Launcher, Microsoft Launcher, and Nova Launcher allow you to back up your launcher configuration, but none of them do so through the normal Android backup system, the way WhatsApp or Viber do. Instead they let you export a configuration file you can reload. I guess it might be so you can copy your home screen from one phone to another, but I find it strange.

In any case, if you have suggestions for the best Android launcher, I’m happy to hear them. I’m not set in my ways with Nova Launcher, and I’m happy to pay a reasonable amount (up to £10, I would say) for a “Pro” launcher, because I know they’re not cheap to build. And if any of you know of a “modal” launcher that would let me change the primary home screen depending on whether I’m home or not (I don’t particularly need the detail that Aviate used to provide), I would be particularly happy.

Comixology for Android: bad engineering, and an exemplary tale against DRM

I grew up a huge fan of comic books: not only Italian Disney comics, which are a genre unto themselves, but also US comics from Marvel. You could say I grew up on Spider-Man and Duck Avenger. Unfortunately, actually holding on to physical comic books is getting harder nowadays, simply because I’m travelling all the time, and I also try to keep as little physical media as I can manage, given the space constraints of my apartment.

Digital comics are, thus, a big incentive for me to keep reading. And in particular, a few years ago I started buying my comics from Comixology, which was later bought by Amazon. The reason why I chose this particular service over others is that it allowed me to buy, and read, through a single service, the comics from Marvel, Dark Horse, Viz and a number of independent publishers. All of this sounded good to me.

I have not been reading a lot over the past few years, but as I moved to London, I found that the tube rides have the perfect span of time to catch up on the latest Spider-Man or finish up those Dresden Files graphic novels. So at some point last year I decided to get myself a second tablet, one that is easier to bring on the Tube than my neat but massive Samsung Tab A.

While Comixology is available for the Fire Tablet (it being an Amazon property), I settled for the Lenovo Tab 4 8 Plus (what a mouthful!), which is a pretty neat “stock” Android tablet. Indeed, Lenovo’s customization of the tablet is fairly limited, and besides some clearly broken settings in the base firmware (which insisted on setting up Hangouts as the SMS app, despite the tablet not having a baseband), it works quite neatly, and it has a long-lasting battery.

The only real problem with the device is its very limited storage. It’s advertised as a 16GB device, but in truth only about half of that is available to the user, and only after effectively uninstalling most of the pre-installed apps, most of which are thankfully not marked as system apps (which means you can fully uninstall them, instead of just keeping them disabled). Indeed, with each firmware update, fewer apps seem to be marked as system apps: on my tablet the only three apps currently disabled are the File Manager, Gmail and Hangouts (this is a reading device, not a communication device). I can (and should) probably disable Maps, Calendar, and Photos as well, but that’s for later.

Thankfully, this is not a big problem nowadays, as Android 6 introduced adoptable storage, which allows you to use an additional SD card as storage, transparently for both the system and the apps. It can be a bit slow depending on the card and how you use the device, but for a reading device it works just great.
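For reference, adoption can also be driven entirely over adb; this is just a sketch, and the disk identifier shown is an example (yours will differ, and the exact `sm` subcommands may vary between Android versions):

```sh
# List the removable disks the storage manager knows about;
# it prints an identifier such as "disk:179,64" (device-specific).
adb shell sm list-disks

# Adopt the whole card as "private" storage: this wipes the card
# and encrypts it so it only works in this one device.
adb shell sm partition disk:179,64 private

# Alternatively, split it: 50% portable, 50% adopted.
adb shell sm partition disk:179,64 mixed 50
```

The same thing can be done from the Settings UI by formatting the card as “internal” storage; the command line just adds the option of a mixed split.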

You could move apps to the SD card in older Android versions too, but in those cases you would end up with unencrypted apps that still stored their data on the device’s main storage. For those cases, a number of applications, including for instance Audible (also an Amazon offering), allow you to select an external storage device for their data files.

When I bought the tablet and SD card and installed Comixology on it, I didn’t know much about this part of Android, to be honest. I only checked whether Comixology allowed storing the comics on the SD card, and since it did, I was all happy. I had adopted the SD card without realizing what that actually meant, though, and that was the first problem, because the documentation from Comixology didn’t match my experience: the setting to choose the SD card for storage never appeared. I contacted tech support, who kept asking me questions about the device and what I was trying to do, but provided no solution.

As it turned out, everything was actually alright: since I had adopted the SD card before installing the app, the app got automatically installed on it and used the card for storage, which let me download as many comic books as I wanted without bothering me at all.

Then, some time earlier this year, I found I couldn’t update the app anymore: it kept failing with a strange Play Store error. So I decided to uninstall and reinstall it… at which point I had no way to move it back to the SD card! They had disabled the option to move the application in their manifest, and that’s why the Play Store was unable to update it.
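For context, the knob in question is the `android:installLocation` attribute in the app’s manifest; the package name below is hypothetical, but the attribute values are the real ones Android defines:

```xml
<!-- Hypothetical AndroidManifest.xml fragment. With "auto" the
     system (and the user) may move the app to external/adopted
     storage; "internalOnly" forbids it, which would explain both
     the missing "move to SD" option and the failing update. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.comicsreader"
    android:installLocation="auto">
    <!-- … application element … -->
</manifest>
```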

Over a month ago I contacted Comixology tech support, telling them what was going on and assuming this was an oversight. Instead I kept getting stubborn responses claiming that moving the app to the SD card doesn’t move the comics (wrong), or insinuating I was using a rooted device (also wrong). I still haven’t managed to get them to make the app movable again, even though the Kindle app, also from Amazon, moves to the SD card just fine. Ironically, you can read comics bought on the Kindle Store with the Comixology app but, for some reason, not vice versa. If I could just use the Kindle app, I wouldn’t even bother installing the Comixology app.

I have now cancelled my Comixology Unlimited subscription, cancelled my subscriptions to new issues of Spider-Man, Bleach, and a few other series, and am pondering the best solution to my problem. I could use non-adopted storage for the tablet if I effectively dedicated it to Comixology, but in that case I wouldn’t be able to download Google Play Books or Kindle content to the SD card, as they don’t support that external storage mode. I could just read a few issues at a time, using the ~7GB of internal storage I have available, but that’s also fairly annoying. More likely I’ll start buying comics from another service that has a better understanding of the Android ecosystem.

Of course the issue remains that I have a lot of content on Comixology, and only a very limited subset of comics is DRM-free. This is not, strictly speaking, Comixology’s fault: the publishers are the ones who decide whether to apply DRM to their content. But it definitely shows an issue that many publishers don’t seem to grasp: faced with technical problems like this, consumers would have had better “protection” if they had just pirated the comics!

For the moment, I can only hope that someone reading this post happens to work for, or knows someone working for, Comixology or Amazon (in the product teams — I know a number of people in the Amazon production environment, but I know they are far away from the people who could fix this), and that they can update the Comixology app to work with modern Android, so that I can resume reading all my comics easily.

Or, if Amazon feels like it, I’d be okay with them giving me a Fire tablet to use in place of the Lenovo. Though I somewhat doubt that’s something they would be happy to do.

More smartphones shenanigans: Ireland and the unlocked phones

In my previous rant I noted that in Ireland it’s next to impossible to buy unlocked phones. Indeed, when I went to Carphone Warehouse (which at least in the UK is owned by Samsung) looking for a phone for travelling to China, while they had plenty of phones to choose from, they all came with contracts.

Contracts are useful for most people: effectively, the carrier gives you a discount on the phone in exchange for your commitment to stay their customer for a certain amount of time. When you do this, they lock the phone to their network, so that you can’t just switch to another carrier without either giving them their due in subscriptions or paying back the discount on the phone. In general, I see this approach as reasonable, although it has clearly created a bit of a mess in the market, particularly at the cheaper end of the phone scale.

I have to admit I had not paid much attention to this in Ireland until now, simply because I have been using my company-provided phone for most of my day-to-day travel. Except in China, where that would not really be appropriate. So when I had to go back to Shanghai, I found myself in need of a new phone. I ended up buying one at Argos because they could source it for me by the following day, which is what I needed, and they also had last year’s Sony flagship (the Xperia X) at a decent discount, particularly compared to the not-much-better Xperia XZ. Alternatively, Amazon would have worked, but that would have taken too long, and the price for this particular model was actually lower at Argos.

As is usual for most Android phones, the device ran through a number of system software updates as soon as it was turned on. Indeed, after three cycles the device, which started off with Android 6.0, ended up on 7.0. Not only that, but by now I know that Sony appears to care about the device quite a bit. While they have not updated it to 7.1, they have pushed a new system software build — I noticed because my phone started downloading it while at Changi airport in Singapore, connected to a power pack and the airport’s WiFi! With this update, the phone is running the Android security update of May 1st, 2017.

That made me compare it with the Xperia XA, the locked phone I bought from Three and have now managed to unlock. The phone came “branded” by Three Ireland, which for the most part just seemed to mean it splashed their custom logo at boot. Unlocking the phone did not make it update to a newer version, or de-brand itself. And despite being the cheaper version of the X, and theoretically of the same generation, it was still stuck on Android 6.0.

Indeed, before the last update, probably released at the same time as the latest Xperia X firmware, the security patch level was reported as April 1st, 2016: over a year ago! Fortunately the latest update at least brings it into this year, as the patch level is now January 5th, 2017. As it turns out, even the non-branded version of the phone is only available up to Android 6.0. At least I should tip my hat to Sony for actually caring about users, at least enough to provide these updates: my Samsung Tab A is at security level June 1st, 2016, and has had no software updates in nearly as long.

There is officially no way to de-brand a phone, but there are of course a number of unofficial options out there, although a significant number of them relied on CyanogenMod, and nowadays rely on… whatever the name of the new project that forked from it is. I did manage to bring the phone to a clean slate with somewhat sketchy instructions, but as I said, even the de-branded version did not update to Android 7.0, and I’m not sure whether I would now have to manage software updates manually. But since the phone does not seem to remember that it was ever branded, and there is no Three logo, I guess it might be alright. And since I did not have to unlock the bootloader, I’m relatively confident the firmware was signed by Sony to begin with.

What I found interesting from using the tool to download Sony’s firmware is that most of their phones are indeed sold in Ireland, but there is no unbranded Irish firmware. There are, though, a number of unbranded firmwares for other countries, including the UK. My (unbranded, unlocked) Xperia X is indeed marked down as UK firmware. Effectively, it looks like Ireland is once again acting as “UK lite” by not having its own devices, relying on the UK versions instead. Because who would invest time and energy to cater to the 4.5M-people market we have here? Sigh.

Sniffing on an Android phone with Wireshark

In my review of the iHealth glucometer I pointed out that I did check whether the app talked with the remote service over TLS or not. This was important because if it didn’t, it would be sending medical information in plaintext. There are a few other things that can go wrong; for instance, an app can fail to validate the certificate provided over TLS, effectively allowing MITM attacks to succeed. But that’s a different story altogether, so I won’t go there for now.

What I wanted to write down are some notes on my experience, if nothing else because it took me a while to get all the pieces in place, and I could not find a single write-up anywhere explaining what the error message I was receiving was about.

First of all, this is about the Wireshark tool and Android phones, but at the end of the day you’ll end up with something that works almost universally, with a bunch of caveats. So make sure you get Wireshark installed, and make sure you never run it as root, for your own safety.
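On Linux, the usual way to capture without running the GUI as root is to grant the capture capabilities to the dumpcap helper only. This is a sketch; the binary path and group name may differ per distribution (Gentoo sets up a wireshark group for exactly this purpose):

```sh
# Allow members of the "wireshark" group to run the capture
# helper, and give the helper (not the GUI!) raw-socket rights.
sudo usermod -aG wireshark "$USER"
sudo chgrp wireshark /usr/bin/dumpcap
sudo chmod 750 /usr/bin/dumpcap
sudo setcap cap_net_raw,cap_net_admin+eip /usr/bin/dumpcap
# Log out and back in for the group change to take effect.
```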

Rick suggested looking into the androiddump tool that comes with Wireshark; on Gentoo this requires enabling the right USE flag. It uses the extcap interface to “fetch” the packets to display from a remote source. I like this idea among other things because it splits the displaying/parsing from the capturing. As I’ll show later, this is not the only useful tool using that interface.

There are multiple interfaces androiddump can capture from; these include the logcat output, which makes it very useful when you’re debugging an application in real time, but what I cared about was sniffing the packets from the interfaces on the device itself. This kept failing with the following error:

Error by extcap pipe: ERROR: Broken socket connection.

And no further debugging information was available. Googling for a good half hour didn’t get me anywhere; I even started strace‘ing the process (to the point that Wireshark crashed in a few situations!) until I finally managed to figure out the right -incantation- invocation of the androiddump tool… which had no more information even in verbose mode, but at least told me what it was trying to do.

The explanation is kind of simple: this set of interfaces is effectively just a matryoshka of interfaces. Wireshark calls into extcap, which calls into androiddump, which calls into adb, which calls into tcpdump on the device.
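One way to see where that chain breaks is to run androiddump by hand, outside of Wireshark. These are the standard extcap options, though the interface name is device-specific (it embeds the adb serial of the connected phone), so the one below is just a placeholder:

```sh
# List the interfaces androiddump can offer (logcat, tcpdump, …);
# each name is suffixed with the serial of the connected device.
androiddump --extcap-interfaces

# Attempt a capture by hand, writing to a named pipe, to get a
# more talkative failure than Wireshark's one-line error.
mkfifo /tmp/android.pcap
androiddump --capture --extcap-interface=android-tcpdump-SERIAL \
    --fifo=/tmp/android.pcap --verbose
```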

And here is the problem: my device (a Sony Xperia XA from Three Ireland) does have a tcpdump command, but the only thing it does is return 1 as its exit status, and that’s it. No error message, and not even a help output to figure out whether you need to enable something. I have not dug into the phone much more, because I was already kind of tired of having to figure out pieces of the puzzle that are not obvious at all, so I looked for alternative approaches.

Depending on the operating system you use to set the capture up, you may be able to turn your computer into an access point and connect the phone to it. But this is not easy, particularly on a laptop with already-oversubscribed USB ports. So I had to look for alternatives.

On the bright side, my router is currently running OpenWRT (with all the warts that has), which means I already have some leeway over network access. Googling around suggested setting up a tee: telling iptables to forward a copy of every single packet coming from or to the phone to another MAC address. This is relatively expensive, and not reliable over WiFi networks anyway, besides increasing congestion on an already busy network.
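For the record, this is roughly what the tee approach looks like on the router. It needs the TEE iptables target (the iptables-mod-tee package on OpenWRT), and the MAC and IP addresses below are examples:

```sh
# Mirror everything the phone sends (matched by its MAC address)
# to the machine running Wireshark, which must be on the same link.
iptables -t mangle -A PREROUTING \
    -m mac --mac-source aa:bb:cc:dd:ee:ff \
    -j TEE --gateway 192.168.1.10

# Mirror everything addressed back to the phone's IP as well.
iptables -t mangle -A POSTROUTING \
    -d 192.168.1.42 \
    -j TEE --gateway 192.168.1.10
```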

I opted instead for another tool available in extcap: SSH-based packet captures. On Gentoo these require the sshdump and libssh USE flags. With this interface, Wireshark effectively opens a session via SSH to the router and runs tcpdump on it. It can also use dumpcap or tshark, which are Wireshark-specific tools and would be significantly more performant, but there is no build of them for OpenWRT, so that doesn’t help either.

While this actually increases the amount of traffic over WiFi compared to the tee option, it does so over a reliable channel, and it allows you to apply capture filters, as well as start and stop the capture as needed. I ended up going with this option, and the good thing is that if you know the hardware addresses of your devices, you can now very easily sniff any of the connected clients just by filtering on that particular address, which opens the door to interesting discoveries. But that’s for another day.
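If you’d rather not configure the extcap interface, the same trick works with a plain pipeline; this is essentially what sshdump does for you. The router address, interface name, and MAC address below are examples:

```sh
# Run tcpdump on the router, unbuffered (-U), writing the pcap
# stream to stdout (-w -), filtered to one client's MAC address,
# and feed it straight into a local Wireshark (-i - reads stdin,
# -k starts the capture immediately).
ssh root@192.168.1.1 \
    "tcpdump -i br-lan -U -w - ether host aa:bb:cc:dd:ee:ff" \
    | wireshark -k -i -
```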

Technology and society, the cellphone example

After many months without blogging, you may notice I’m blogging a bit more about my own opinions than before. Part of it is that these are things I can write about without risking conflicts of interest with work, which makes them easier to write, and part of it is that my opinions differ from what I perceive as those of the majority of Free Software advocates. My hope is that sharing my opinions openly may, if not sway the opinion of others, at least show that there are other people who hold them. To make them easier to filter out, I’ll be tagging them as Opinions, so you can just ignore them if you use anything like NewsBlur and its Intelligence Trainer (I love that feature.)

Note: I had to implement this in Hugo, as it was not available when I went to check whether the Intelligence Trainer would have worked. Heh.

Okay, back on topic. You know how technologists, particularly around the Free Software movement, complain about the lack of openness in cellphones and smartphones? Or the lack of encryption, or of trustworthy software? Sometimes together, sometimes with one deemed more important than the other? It’s very hard to disagree with the objective: if you care about Free Software you want more open platforms, and everybody should (to a point) care about safety and security. What I disagree with, for the most part, is the execution.

The big problem I see with this is that their ideal systems lack one big attribute: affordability. That does not strictly mean being cheap; it also means being something people can afford to use. Linux desktops are cheap, if you look only at the bottom line of an invoice, but at least when I last had customers as a -sysadmin for hire- managed services provider, none of them could afford Linux desktops: they all had to deal either with proprietary software as part of their main enterprise, or with documents that required Microsoft Office or similar.

If you look at the smartphone field, there have been multiple generations of open source or free software projects trying to get something really open out, and yet what most people are using now is either Android (which is partly but not fully open, and clearly not an open source community) or iOS (which is completely closed, and good luck with it.) These experiments were usually bloody expensive high-end devices (mostly with the excuse of being development platforms), or tried to get the blessing of “pure free software” by hiding the binary blobs in non-writeable flash memory, so that they could be shipped with the hardware but not with the operating system.

There is, quite obviously, the argument that early adopters always end up paying a higher price for technology: when something is experimental it costs more, and can only become cheaper at scale. But on the other hand, way too many of the choices were made, in my opinion, just for the sake of showing off, as in cases like Nokia’s N900 and the Blackphone.

Nowadays, one of the most common answers to the lack of openness and updates on Android is still CyanogenMod, despite some of the political/corporate shenanigans in that project’s backstory. Indeed, as an aftermarket solution, CyanogenMod provides a long list of devices with a significantly more up-to-date (and thus more secure) Android version. It’s a great project, and the volunteers (who have been doing the bulk of the reverse engineering and build set-up) have done a great job all these years. But it comes with a bit of a selection bias: it’s very easy to find builds for a newer flagship Android phone, even in different flavours (I see six separate builds for the Samsung Galaxy S4, since each US provider has different hardware), but it’s very hard to find up-to-date builds for cheaper phones, like the Huawei Y360 that Three UK offers (or used to offer) for £45 a few months back.

I can hear people saying: “Well, of course you check before you buy whether you can put a free ROM on it!” Which kind of makes sense if what constrains your choice is openness, but expecting the majority of people to care about that first and foremost is significantly naïve. Give me a chance to explain why I think we should spend a significant amount of time working on the lower end of the scale rather than the upper.

I have a Huawei Y360 because I needed a 3G-compatible phone to connect my (UK) SIM card while in the UK. This is clearly a first world problem: I travel enough that I have separate SIM cards for different countries, and my UK card is handy for more than a few countries (including the US.) On the other hand, since I really just needed a phone for a few days (and going into why is a separate issue) I literally went to the store and asked them “What’s the cheapest compatible phone you sell?” and the Y360 was the answer.

This device is what many people would define as craptastic: it’s slow, it has a bad touchscreen, and very little memory for apps and the like. It comes with a non-stock Android firmware by Huawei, based on Android 4.4. The only things going for the device are that it’s cheap, its battery actually tends to last, and for whatever reason it lets you select GPS as the time source, which is something I have not seen any other phone do in a while. It’s also not fancy-looking: a quite boring plastic shell, but fairly sturdy if it falls. It’s actually fairly well targeted, if what you have is not a lot of money.

The firmware is clearly a problem in more than one way. This being not just a modified firmware by Huawei, but one customized for the provider, means that updates are more than just unlikely: any modification would have to be re-applied by Three UK, and given the likely nil margin they make on these phones, I doubt they would bother. And that is a security risk. At the same time, the modifications Huawei made to the operating system seem largely cosmetic on the surface, which makes you wonder how much of the base components were modified underneath. Your trust in Huawei, Chinese companies, or companies of any other country is your own business, but the fact that it’s very hard to tell whether this phone behaves like any other out there is clearly not up for debate.

This phone model also appears to be very common in South America, for whatever reason, which is why googling for it finds you a few threads on Spanish-language forums where people either wonder whether custom ROMs are available, or may have managed to get something to run on it. Unfortunately, my Spanish is not functional, so I have no idea what the status is at this point. But this factoid is useful to make my point.

Indeed, my point is that this phone model is likely very common among people who don’t have much to spend on “good” phone hardware, yet may need a smartphone that handles the Internet decently enough to be usable for email and similar services. These are also the people who need their phones to last as long as possible, because they can’t afford to upgrade every few years; being able to replace the firmware with something more modern and forward-looking, or with a slimmed-down build suited to the lack of power of the hardware, would clearly be very effective. And yet you can’t find a CyanogenMod build for it.

Before going down a bit of a road into the actual technicalities of why these ROMs may be missing, let me write down some effectively strawman answers to two complaints I have heard before, and may well have voiced myself when I was young and stupid (now I’m just stupid.)

If they need long-lasting phones, why not spend more upfront and get a future-proof device? It is very true that if you can afford a higher upfront investment, many devices become cheaper in the long term. This is not just the case for personal electronics like phones (and cameras, etc.) but also for home appliances such as dishwashers. When, some eight or so years ago, my mother’s dishwasher died, we were mostly strapped for cash (but we were, at the time, still a family of four, so the dishwasher was handy for the time saving), so we ended up buying a €300 dishwasher at a heavy discount when a new hardware store had just opened. Over the next four years, we had to have it repaired at least three times, which brought its TCO (without accounting for soap and supplies) to at least €650.

The fourth time it broke, I was just back from my time in Los Angeles, and thus had the cash to buy a good dishwasher, for €700. Four years later that dishwasher is working fine, with no repairs needed. It needs less soap, too, and it has a significantly higher energy rating than the one we had before. Win! But I was lucky I could afford it at the time.

There are ways around this: paying by instalments is one of them, but not everybody is eligible for that either. In my case, I was freelancing at the time, which means nobody would really give me a loan for it. The best I could have done would have been using my revolving credit card to pay for it, but let me just tell you that interest compounds much faster on that than on a normal loan. Flexibility costs.

This, by the way, relates to the same toilet paper study I referenced yesterday.

Why do you need such a special device? There are cheaper smartphones out there, change provider! This is a variation of the argument above. Three UK, like most of their Three counterparts across Europe, is a bit peculiar, because you cannot use plain GSM phones with them: you need at least UMTS. For this reason you need more expensive phones than your average SIM-free Nokia. So using a different provider may be warranted if all you care about is calls and texts, but nowadays that is rarely the case.

I’m now failing to find a source link for it, but I read not too long ago (likely in the Wall Street Journal or New York Times, as those are the usual newspapers I read when I’m at a hotel) about how important Internet-connected mobile phones are for migrants. The article listed a number of good reasons, among which I remember being able to access the Internet to figure out what kind of documents/information they need, being able to browse available job openings, and of course being able to stay in touch with family and friends who may well be in different countries.

Even without going to the full extreme of migrants who just arrived in a country, there are a number of “unskilled” job positions that are effectively “at call” — this is nothing new: the whole area of Dublin where I live now, one of the most expensive in the city, used to be a dormitory for dock workers, who needed to be as close as possible to the docks themselves so that they could get there quickly in the morning to find work. “Thanks” to technology, physical proximity to home has been replaced with reachability. While GSM and SMS are actually fairly reliable, having the ability to use WiFi hotspots to receive calls and messages (which a smartphone allows, but a dumbphone doesn’t) is a significant advantage.

An aside on the term “unskilled” — I really hate it. I have been told that delivering and assembling furniture is an unskilled job; I would challenge my peers to bring as many boxes inside an apartment as quickly as the folks who delivered my sofa and the rest of my furniture a few months ago, without damaging either the contents of the boxes or the apartment, except that I don’t want to ruin my apartment. It’s all a set of different skills.

Once you factor this in, the “need” for a smartphone clearly outweighs the cheapness of a SIM-free phone. And once you are in for a smartphone, having a provider that does not nickel-and-dime your allowances is a plus.

Hopefully now this is enough social philosophy for the post — it’s not really my field and I can only trust my experience and my instincts for most of it.

So why are there not more ROMs for these devices? Well, the first problem is that, for the most part, the people who would need those ROMs and the people who can make them have completely different sets of skills. Your average geek who has access to the knowledge and tools to figure out how the device works, and either extract or build the drivers needed, is very unlikely to do that for a cheap, underpowered phone, because they would not be using one themselves.

But this is just the tip of the iceberg, as that could be fixed by just convincing a handful of people who know their stuff to maintain the ROMs for these. The other problem with cheap devices, and maybe less so with Huawei than others, for various reasons, is that the manufacturer is hard to reach, in case the drivers could be made available but nobody has asked. In Italy there is a “brand” of smartphones that prides itself in its advertising material on being the only manufacturer in Italy — turns out the firmware, and thus most likely the boards too, mostly comes from random devshops in mainland China, and can be found in fake Samsung phones in that country. Going through the Italian “manufacturer” would lead to nothing if you need specs or source code. After all, I’ve seen that for myself with a different company before.

A possible answer to this would be to mandate better firmware support over time, fining the manufacturers that refuse to comply with the policy. I have heard this proposed a couple of times, particularly because of the recent wave of IoT-based DDoS attacks that got into the news so easily. I don’t really favour this approach, because policies are terrible to enforce, as should be clear by now to most technologists who have dealt with leaks and unhashed passwords. Or with certificate authorities. It also has the negative side effect of possibly increasing costs, as the smaller players might actually have a hard time complying with these requirements, and thus end up paying the highest price or being kicked out of the market.

What I think we should be doing is change our point of view on the Free Software world and really become, as the organization calls itself, software in the public interest. And public interest does not mean limiting it to what the geeks think the public interest should be (that does, by the way, include me.) Enforcing the strict GPL has become a burden to so many companies by now that most corporate-sponsored open source software nowadays is released under the Apache 2 license. While I would love an ideal world in which all free software is GPL and everybody just contributes back at every chance, I don’t think that is very likely, so let’s accept that and be realistic.

Instead of making it harder for manufacturers to build solutions based on free and open source software, make it easier. That is not just a matter of licensing, though that comes into play; it’s a matter of building communities with the intent of supporting enterprises in building upon them. With all the problems it shows, I think the Linux Foundation is at least trying this road already. But there are things that we can all do. My hope is that we stop the talks and accusations for and against the “purity” of free software solutions. That we accept when a given proposal (proprietary, or coming out of a proprietary shop) is a good idea, rather than ignore it because we think they are just trying vendor lock-in. Sometimes they are and sometimes they aren’t; judge ideas, formats, and protocols on their merits, not on who proposes them.

Be pragmatic: support partially closed-source solutions if they can be supported by, or supportive of, Free Software. Don’t buy into the slippery slope argument. But strive to always build better open-source tools whenever there is a chance.

I’ll try to write down some of my preferences for what we should be doing, in the space of interaction between open- and closed-source environments, to make sure that users are safe and the software is as free as possible. For the moment, I’ll leave you with a good talk by Harald Welte from 32C3; in particular, at the end of the talk there is an important answer from Harald about using technologies that already exist rather than trying to invent new ones that would not scale easily.

Abbott FreeStyle Libre, the mobile app

You may remember I reviewed the Abbott FreeStyle Libre and even tried reverse engineering its protocol. One of the things that I did notice from the beginning is that the radio used by the sensors themselves is compatible with the NFC chip in most cellphones.

This was not a surprise to anybody; indeed I was pointed at a website (which I refuse to link) of Nightscouters (self-proclaimed people who “are not waiting”), which in turn pointed to an unofficial Android app that is able to read that data. The app is unsafe though (thus the lack of links), as it does not wait for the first hour to get a reading, and allows you to keep reading the sensor after the two weeks provided by the manufacturer. Their excuse is that you should be able to get the data and judge by yourself; I don’t like that answer, particularly as the raw readings need to be normalized by the Abbott reader device to produce a valid reading.

Indeed, the various groups I see proclaiming that “they are not waiting” are vastly more interested in the idea of quantified self than in an actual health need for this data. They would be harmless if they didn’t suggest using similarly risky methods to gain information that could be used to make medical decisions. But this is a topic for another time.

Instead, a few months ago, Abbott themselves sent out an email to registered users announcing an official Android app. Called LibreLink, and developed by a company identified as AirStrip Technologies, based in San Antonio, Texas, the app is able to initialize and read a FreeStyle Libre sensor from an NFC-capable Android phone. It can be used either to replace the original Abbott reader device, or in addition to it.

I tried it as soon as I could, which did not mean right when I received the mail: the app is only able to read a sensor that it either initialized by itself, or that was scanned during the one-hour window in which the readings are not yet available. I suppose this has something to do with the pseudo-calibration that the FreeStyle Libre system provides.

Having the ability to take my readings from the phone turned out to be a good thing. While I still try to keep the readings on the official device, which allows downloading them to a computer and provides a full PDF history I can send my doctor, the app let me forget my reader at home for a day without worrying, and take a reading during a meeting to which I had not thought to bring the reader. It also provides more information than the reader itself, including what Abbott calls the Ambulatory Glucose Profile (AGP), which ends up being a graph of the 10th-90th percentiles, the 25th-75th percentiles, and the median.
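
Those bands are, at heart, just running percentiles over the readings. As a rough sketch of the idea — not Abbott’s actual algorithm, and the function name and single-bucket simplification are my own — the numbers an AGP-style graph plots for one time-of-day bucket come out like this:

```python
from statistics import median, quantiles

def agp_bands(readings):
    """Compute the bands an AGP-style graph plots for one time-of-day
    bucket: the 10th/90th and 25th/75th percentiles plus the median.
    Needs at least two readings in the bucket."""
    q = quantiles(readings, n=100, method="inclusive")  # q[i-1] = i-th percentile
    return {
        "p10": q[9], "p25": q[24],
        "median": median(readings),
        "p75": q[74], "p90": q[89],
    }
```

A real AGP repeats this per time slot across two weeks of readings and draws the bands as shaded areas around the median line, which is exactly why it looks like superimposed graphs at first glance.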

Aside: it turns out that being used to reading percentile graphs, which is something my dayjob trained me to do, made it much more obvious to me what the AGP graphs were; I had to correct the Diabetes Ireland website, which listed it as a set of superimposed graphs. It took me a good half-hour to realize that it was only obvious to me because of my work, which is totally unrelated to health care, and would not be so to a doctor who has not seen that kind of monitoring before. Cross-discipline information exchange is good.

Unfortunately, it seems like the app likes to share data with its developers. This makes sense to a point: science is progressing, and they want to know how exactly it makes your life different. Does it make it easier to keep your sugar within limits? Does it make it more likely to get sugar lows, rather than highs? These are all good questions. I have not yet investigated whether the app shares this data over a properly encrypted channel or whether it’s an actual privacy risk. Maybe I’d rather not know, since it does scare me a bit, but I guess I should do that at some point.

Again as an aside, I did get myself an NFC-capable Android phone for the sole reason of being able to use that app while inspecting its traffic. The phone I previously used for that is my corporate phone, so I would not be able to fiddle with traffic dumping on it. But I guess I’ll have to find more time for that, nowadays.

While I have been happy that the app is around, it has not been perfect either. In particular, I have a bad feeling that it might have something to do with two sensors failing on me with an error I had not seen in the eight months before the app was introduced. I made the mistake of not contacting Abbott right away, which meant I threw them out instead of sending them for inspection. I have since realized my mistake, but could not risk causing more failures until about this month, since I’ve been traveling a bit, and it takes me a week or two to get new sensors in Dublin.

The reason why I suspect this is related to the app is not just that I didn’t get those kinds of errors before I started using it, but also that I had read of similar failures happening when using the unofficial app suggested by the Nightscouters, though in that case it’s said to depend on the NFC chip used by the phone. The two things together made me suspicious.

I’m actually told (by both my doctor and Diabetes Ireland) that Abbott is meant to start selling the device (and sensors) in Ireland within the next month. They do now have an Irish website, which is more than they had when I started looking into it. I do not know yet whether my long-term illness coverage includes these sensors or whether I’ll have to keep paying for them myself (at least the falling Sterling is helping), but either way it’s going to be simpler to procure them if one fails.

As a final note, I should probably write down my own impressions of when the sensor failed, compared to the article Daniel pointed me at back when I posted about my experimentation with this CGM (pardon, flash glucose monitoring system.) Although the first time this happened it was not due to the mobile app; instead the sensor’s applicator failed, and I “primed” a sensor without a needle.

This happened when I was in Lisbon on vacation, and I only noticed after the hour had passed, while I was queuing up for a train ticket to Cascais. I’ll admit it: I started panicking. I had not planned for it and I did not want to be out and about alone without the sensor. Part of the problem was that I did not have my normal glucometer with me, so I had absolutely no way to make sure I was okay. The panic would probably have been limited had I managed to get a ticket in less than twenty minutes. In the end I gave up when I got to the machine and it refused to accept either my cash (too wrinkly) or my card (too foreign.) So I turned around, went back to the hotel and put on a new sensor (yes, I traveled with a spare.)

Would I have panicked like in the linked article? Maybe, but I would like to think not. When the sensor actually failed me at home, this time indeed possibly due to the app, I didn’t have a spare sensor at home, and that made me angry. But at the same time I knew it was not that important: I lived without it before, and I can live without it now, it’s just more inconvenient. What I did was put a new cassette into the Accu-Chek, which is significantly nicer to carry around than the kit for the Libre-as-glucometer, and order more sensors. I ended up staying a week without the Libre; it bothered me, but I didn’t change much of my habits: the Libre trained me in how to relate my sensory experience with blood sugar, and in what I can and cannot eat when, so I felt fairly safe overall.

FOSDEM and the unrealistic IPv6-only network

Most of you know FOSDEM already; for those who don’t, it’s the largest Free and Open Source Software focused conference in Europe (if not the world.) If you haven’t been to it I definitely suggest it, particularly because it’s a free-admission conference and it always has something interesting to discuss.

Even though there is no ticket and no badge, the conference does have free WiFi Internet access, which is how the number of attendees is usually estimated. In the past few years, their network has also been pushing the envelope on IPv6 support, first providing a dual-stack network when IPv6 was fairly rare, and in recent (three?) years providing an IPv6-only network as the default.

I can see the reason to do this, in the sense that a lot of Free Software developers are physically at the conference, which means they can see their tools suffer in an IPv6 environment and fix them. But at the same time, this has generated lots of complaints about Android not working in this setup. While part of that noise was useful, I got the impression this year that the complaints are repeated only for the sake of complaining.

Full disclosure, of course: I do happen to work for the company behind Android. On the other hand, I don’t work on anything related at all. So this post is as usual my own personal opinion.

The complaints about Android started off quite healthy: devices couldn’t actually connect to an IPv6 dual-stack network, and then they couldn’t connect to an IPv6-only network. Both are valid complaints to begin with, though there is a bit more to it. This year in particular the complaints were not so healthy, because current versions of Android (6.0) actually do support IPv6-only networks, though most Android devices out there are not running this version, either because their hardware is too old or because the manufacturer has not released a new build yet.

What does tick me off, though, has really nothing to do with Android, but rather with the idea people have that the current IPv6-only setup used by FOSDEM is a realistic approach to IPv6 networking — it really is not. It is a nice setup to test things out and stress the need for proper IPv6 support in tools, but it’s very unlikely to be used in production by anybody as is.

The technique used (at least this year) by FOSDEM is NAT64, paired with DNS64. To oversimplify how this works, the DNS side modifies replies when resolving hostnames so that they always provide an IPv6 address, even when the name only has A records (IPv4 addresses). The synthesized IPv6 addresses map back to IPv4, and the edge router then “translates” between the two connections.
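
The address-synthesis half of this is simple enough to sketch: the IPv4 address is embedded verbatim in the low 32 bits of an IPv6 prefix, which is how the translator later knows where to send the IPv4 leg of the connection. A minimal illustration (the function name is mine; 64:ff9b::/96 is the standard well-known prefix, though networks can use their own):

```python
import ipaddress

def synthesize_nat64(ipv4: str, prefix: str = "64:ff9b::") -> str:
    """Embed an IPv4 address into a NAT64/DNS64 /96 prefix, the way a
    DNS64 resolver synthesizes AAAA records from A records."""
    v4 = int(ipaddress.IPv4Address(ipv4))
    base = int(ipaddress.IPv6Address(prefix))
    return str(ipaddress.IPv6Address(base | v4))

# synthesize_nat64("192.0.2.1") -> "64:ff9b::c000:201"
```

The edge translator simply reverses the embedding to recover the IPv4 destination, which is also why connecting to a bare IPv4 literal fails on such a network: no DNS lookup happens, so no synthesized address ever exists.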

Unlike classic NAT, this technique requires user-space components, as the kernel uses separate stacks for IPv4 and IPv6 which do not allow direct message passing between the two. This makes it complicated and significantly slower (you have to copy the data from kernel to userspace and back all the time), unless you use one of the hardware routers that are designed to deal with this (I know both Juniper and Cisco have those.)

NAT64 is a very useful testbed, if your target is figuring out what in your stack is not ready for IPv6. It is not, though, a realistic approach for consumer networks. If your client application does not have IPv6 support, it’ll just fail to connect. If for whatever reason you rely on IPv4 literals, they won’t work. Even worse, if the code allows a connection to be established over IPv6, but relies on IPv4 semantics for things like logging, or (worse) access control, then you now have bugs, crashes or worse, vulnerabilities.

And while fuzzing and stress-testing are great for development environments, they are not good for final users. In the same way -Werror is a great tool to fix your code, but uselessly disrupts your users.

In a similar fashion, while IPv6-only datacenters are not that uncommon – Facebook (the company) talked about them two years ago already – they serve a distinctly different purpose from a customer network. You don’t want, after all, your database cluster to connect to random external services that you don’t control — and if you do control the services, you just need to make sure they are all available over IPv6. In such a system, having a single stack to worry about simplifies, rather than complicates, things. I do something similar for the server I divide into containers: some of them, which are only backends, get no IPv4 at all, not even NATed. If they ever have to fetch something from the Internet at large to build, they go through a proxy instead.

I’m not saying that FOSDEM setting up such a network is not useful. It hugely is, as it clearly highlights the problems of applications not supporting IPv6 properly. And for Free Software developers, setting up a network like this on their own might indeed be too expensive in time or money, so it is a chance to try things out and iron out bugs. But at the same time it does not reflect a realistic environment, which is why adding more and more rants to the tracking Android bug (which I’m not even going to link here) is not going to be useful — the limitation was known for a while and has been addressed in newer versions, but it would be useless to try backporting the fix.

For what it’s worth, what is more likely to happen as IPv6 adoption progresses is that providers will move towards solutions like DS-Lite (nothing to do with Nintendo), which couples native IPv6 with carrier-grade NAT. While this has limitations, depending on the size of the ISP pools, it is still easier to set up than NAT64, and it is essentially transparent for customers whose systems don’t support IPv6 at all. My ISP here in Ireland (Virgin Media) already has such a setup.

Yes, we still need autotools

One of the most common refrains I hear lately, particularly when people discover Autotools Mythbuster, is that we don’t need autotools anymore.

The argument goes as such: since Autotools were designed for portability on ancient systems that nobody really uses anymore, and that most of the modern operating systems have a common interface, whether that is POSIX or C99, the reasons to keep Autotools around are minimal.

This could be true… if your software does nothing that is ever platform-specific. Which is indeed possible, but quite rare. Indeed, unpaper has a fairly limited amount of code in its configure.ac, as the lowest-level thing it does is read and write files; I could have easily used anything else for the build system.

But on the other hand, if you’re doing anything more specific, which usually includes network I/O, you end up with a bit more of a script. Furthermore, if you don’t want to pull a systemd and decide that the latest Linux version is all you want to support, you end up having to figure out alternatives, or at least conditionals for what you can and cannot use. You may not want to go as far as VLC, which supports anything between OS/2 and the latest Apple TV, but there is space between those extremes.

If you’re a library, this is even more important. Because while it might be that you’re not interested in any peculiar systems, it might very well be that one of your consumers is. Going back to the VLC example, I have spent quite a bit of time in the past weekends of this year helping the VLC project by fixing (or helping to fix) the build system of new libraries that are made a dependency of VLC for Android.

So while we have indeed overcome the difficulties of porting across many different UNIX flavours, we still have portability concerns. It is probably true that we should reconsider what Autoconf tests for by default, and in particular there are some tests that are not completely compatible with modern systems: for instance, the endianness tests were an obvious failure when MacIntel arrived, as universal binaries would build the same code for both big endian (PPC) and little endian (Intel). On the other hand, even these concerns matter less now, as universal binaries are already out of style.
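
Endianness is, for what it’s worth, one of the checks that can often move from configure time to run time (or into the code itself), which sidesteps the universal-binary problem entirely: each slice of the fat binary answers for itself. A sketch of the runtime version of the check (my own illustration, not an Autoconf replacement):

```python
import struct
import sys

def endianness() -> str:
    """Detect byte order at run time: pack a 32-bit integer in native
    order and check which end the low byte landed on."""
    return "little" if struct.pack("=I", 1)[0] == 1 else "big"
```

The same trick in C (inspecting the first byte of a known integer through a char pointer) is what a configure-time AC_C_BIGENDIAN test bakes in once and for all, which is exactly what broke for universal builds.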

So yes, I do think we still need portability, and I still think that not requiring a tool that depends on XML-RPC libraries is a good side of autotools…

TEXTRELs (Text Relocations) and their impact on hardening techniques

You might have seen the word TEXTREL thrown around security or hardening circles, or used in Gentoo Linux installation warnings, but one thing that is clear out there is that the documentation around this term is not very useful for understanding why it is a problem. So I’ve been asked to write something about it.

Let’s start by taking apart the terminology. TEXTREL is jargon for “text relocation”, which is once again more jargon, as “text” in this case means “the code portion of an executable file.” Indeed, in ELF files, the .text section is the one that contains all the actual machine code.

As for “relocation”, the term is related to dynamic loaders. It is the process of modifying the data loaded from the loaded file to suit its placement within memory. This might also require some explanation.

When you build code into executables, any named reference is translated into an address instead. This includes, among others, variables, functions, constants and labels — and also some unnamed references such as branch destinations on statements such as if and for.

These references fall into two main types: relative and absolute references. This is the easiest part to explain: a relative reference takes some address as a “base” and then adds or subtracts from it. Indeed, many architectures have a “base register” which is used for relative references. In the case of executable code, particularly with references to labels and branch destinations, relative references translate into relative jumps, which are relative to the current instruction pointer. An absolute reference is instead a fully qualified pointer to memory, or at least to the address space of the running process.

While absolute addresses are kinda obvious as a concept, they are not very practical for a compiler to emit in many cases. For instance, when building shared objects, there is no way for the compiler to know which addresses to use, particularly because a single process can load multiple objects, and they need to all be loaded at different addresses. So instead of writing to the file the actual final (unknown) address, what gets written by the compiler first – and by the link editor afterwards – is a placeholder. It might sound ironic, but an absolute reference is then emitted as a relative reference based upon the loading address of the object itself.

When the loader takes an object and loads it into memory, it’ll be mapped at a given “start” address. After that, the absolute references are inspected, and the relative placeholders resolved to the final absolute addresses. This is the process of relocation. Different types of relocation (or displacements) exist, but they are not the topic of this post.
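
The mechanics of that pass can be sketched in a few lines: every slot that holds an absolute reference starts out as an offset from the object’s own beginning, and relocation just adds the actual load address to each of them. This is a toy model of my own — real relocations carry types, symbol references and addends, all omitted here:

```python
def relocate(image: list, relocation_offsets: list, load_base: int) -> list:
    """Toy relocation pass: each listed slot in the loaded image holds a
    placeholder relative to the object start; rewrite it in place into an
    absolute address by adding the base the object was mapped at."""
    for off in relocation_offsets:
        image[off] = load_base + image[off]
    return image
```

The key point for the rest of the post is the in-place write: whichever page a listed slot lives in has to be writable at load time, and if that slot sits inside .text, that page is code.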

Relocations as described up to now can apply to both data and code, but we single out code relocations as TEXTRELs. The reason for this is to be found in mitigation (or hardening) techniques, in particular what is called W^X, NX or PaX. The basic idea of this technique is to disallow modification of executable areas of memory, by forcing the mapped pages to be either writable or executable, but not both (W^X reads “writable xor executable”.) This has a number of drawbacks, which are most clearly visible with JIT (Just-in-Time) compilation processes, including most JavaScript engines.

But besides the JIT problem, there is also the problem of relocations happening in the code section of an executable. Since the relocation slots need to be written to, it is not feasible (or at least not easy) to provide exclusively writable or executable access to those pages. Well, there are theoretical ways to produce that result, but they complicate memory management significantly, so the short version is that, generally speaking, TEXTRELs and W^X techniques don’t go well together.

This is further complicated by another mitigation strategy: ASLR, Address Space Layout Randomization. In particular, ASLR fully defeats prelinking as a strategy for dealing with TEXTRELs — theoretically on a system that allows TEXTREL but has the address space to map every single shared object at a fixed address, it would not be necessary to relocate at runtime. For stronger ASLR you also want to make sure that the executables themselves are mapped at different addresses, so you use PIE, Position Independent Executable, to make sure they don’t depend on a single stable loading address.

Usage of PIE was for a long while limited to a few select hardened distributions, such as Gentoo Hardened, but it’s getting more common, as ASLR is a fairly effective mitigation strategy even for binary distributions where otherwise function offsets would be known to an attacker.

At the same time, SELinux also implements protection against text relocation, so you no longer need to have a patched hardened kernel to provide this protection.

Similarly, Android 6 now disallows the generation of shared objects with text relocations, although I have no idea whether binaries built to target this new SDK version gain any more protection at runtime, since it’s not really my area of expertise.

Impressions of Android Wear in everyday life

All readers of this blog know I’m a gadgeteer by now. I have been buying technogizmos at the first chance, whenever I had the money for them, and I was thus an early adopter of ebooks back in the day. I have, though, ignored wearables for various reasons.

Well, that’s not strictly true — I did try Google Glass in the past year. Twice, to be precise: once the “standard” version, and once a version with prescription lenses – not my lenses though, so take it with a grain of salt – and neither time did it excite me. In particular, the former wouldn’t be an option due to my need for prescription glasses, and the latter is a terrible option because I got the impression that the display obstructs too much of the field of vision in that configuration.

_Yes, I know I could wear contact lenses, but I’m scared of them so I’m not keeping them in mind. I’m also saving myself the pain in the eye for when smart contact lenses will tell me my blood glucose levels without having to prick myself every day._

Then smartwatches became all the rage, and a friend of mine actually asked me whether I was going to buy one, since I seemed to be fond of accessories… well, the truth is that I’m not really that fond of them. It just looks that way because I always have a bag on me and I like hats (yup, even fedoras, not trilbies; feel free to assassinate my character for that if you want.)

_By the way, the story of how I started using satchels is fun: when I first visited London, I went with some friends of mine, and one of the things we intended to do was go to the so-called Gathering Hall that Capcom set up for players of Monster Hunter Freedom Unite. My options for bringing the PSP around were pants pockets or a cumbersome backpack — one of my friends had just bought a new bag at a Camden Town stall which instead fit the PSP perfectly, and he had space to make the odd buy and not worry about where to stash it. I ended up buying the same model in a different colour._

Then Christmas came and I got a G Watch as a gift. I originally wanted to just redirect it to my sister — but since she’s an iPhone user that was not an option, so I ended up trying it out myself. I have to say it’s an interesting gadget, one I wouldn’t have bought for myself but am actually enjoying.

The first thing you notice when starting to use it is that its main benefit is stopping you from turning on your phone display — because you almost always do that for one of two reasons, checking the time and checking your notifications, both things you can now do by flicking your wrist. I wonder if this can be counted as security, as I’ve been “asked the time” plenty of times around Dublin by now and I would like to avoid a repeat.

Of course, during the day most of the phone’s notifications are work-related: emails asking me to do something, reminders about meetings, alerts when I’m oncall… and in that respect the watch is pretty useful, as you can silence the phone and have the watch “buzz” you by vibrating — a perfect option for the office, where you don’t want to disturb everybody around you, as well as for the street, where the noise would make it difficult to hear the notification sounds — even more so when you’ve stashed the phone in your bag as I usually do.

But the part whose usefulness surprised me the most is using it at home — even though things got a bit trickier there, as I can’t get full coverage of the (small) apartment I rent. On the other hand, if I leave the phone on the coffee table from which I’m typing right now, I get full coverage in the kitchen, which is what makes it so useful at home for me: I can set a timer when cooking, and I have not burnt anything since I got the watch — yes, I’m terrible that way.

Before, I would have to either use Google Search to set the alarm on one of the computers, or use the phone to set it — the former tends to be easily forgotten and is annoying to stop when focusing on a different tab/window/computer, while the latter requires me to unlock the phone to set up the timer, and while Google Now on any screen should be working, it does not seem to stick for me. The watch can be woken by a simple flick of the wrist, responds to voice commands mostly correctly (I still make the mistake of saying «set timer to 3 minutes», which gets interpreted as «set timer 23 minutes»), and is easy to stop (just palm it off).

I also started using my phone to cast Google Play Music to the Chromecast, so I can control playback from the phone itself, which is handy when I get a call, a delivery at the door, or whatever else. It does feel like living in the future to be able to control whatever is playing over my audio system from a different room.

One thing that I needed to do, though, was replace the original plastic strap. The reason is very much personal, but it might be a useful suggestion to others to know that it is a very simple procedure: in my case I just walked into a jeweler's and asked for a leather strap, and half an hour later they had my watch almost ready to go; they only needed my wrist measurement to punch the right holes. Unlike the G Watch R (which honestly looks much better both in pictures and in real life, in my opinion much better than the Moto 360 too, as the latter appears too round to me), the original G Watch has a standard 22mm strap connector, which makes it trivial for a watch repair shop to replace.

With the new strap, the watch is almost weightless to me, partly because the leather is lighter than the plastic, and partly because it does not stick to my hair and pull it every which way. Honestly, I originally wanted a metal strap, because that's the kind of watch I used to wear, but metal interferes with Bluetooth reception, which is poor enough as it is with my phone. It also proves a challenge for charging, as most metal straps are closed loops and the charging cradle needs to fit in the middle of them.

Speaking of reception, I had been cursing hard at the bad reception even in my apartment. This somehow stopped the other day, and only two things changed around the time it improved: I replaced the strap and I removed the Pear app, mostly because it was driving me crazy by buzzing me that the phone had gone away and come back while it just sat in my pocket. Since I don't think, although I can't exclude it, that the original strap was the cause of the bad reception, I've decided to blame the Pear app and keep it off my phone. With better connectivity came better battery life: the watch now manages a day and a half on a charge, which is pretty good for it.

I'm not sure whether wearables are a good bet for the future; plenty of things in the past seemed like they were here to stay and weren't. This is by far not the first try at making a smart watch, of course: I remember the ones that would sync with a PC by reading flashing patterns off the monitor. We'll see what it comes down to. For the moment I'm happy with the gift I received, though I'm not sure I would have bought it for myself.