Diabetes control and its tech; should I build a Chrome app?

Now that I can download the data from three different glucometers, of two different models, with my glucometer utilities, I’ve been pondering ways to implement a print-out or display mode. The text output is good enough if you’re just checking how your readings vary, but when you have more than one meter it becomes difficult to follow them across multiple outputs, and it’s not something your endocrinologist can usefully read.

I have a working set of changes that adds support for a sqlite3 backend to the tool, so you can download from multiple meters and then display the readings as belonging to a single series — which works fine if you are one person with multiple meters, rather than multiple users each with their own glucometer. On the other hand, this made me wonder whether I’m spending my time working in the right direction.

Even just adding support for storing the data locally had me looking into two more dependencies: SQLite3 support (which, yes, comes with Python, but needs to be enabled) and pyxdg (with a fallback) to make sure the data ends up in the right folder, which simplifies backups. A print-out or a UI that can display the data over time would add even more dependencies, meaning the tool would only really be useful if packaged by distributions, or if binaries were distributed. While this would still give you a better tool on non-Windows OSes, compared to having no tool at all if left to LifeScan (the only manufacturer currently supported), it’s still limiting.
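A minimal sketch of the kind of storage I have in mind follows; the table layout and names are just illustrative, not the actual schema, and the hard-coded fallback path is only used when pyxdg is missing:

```python
import os
import sqlite3

try:
    from xdg import BaseDirectory
    # creates ~/.local/share/glucometerutils (or the $XDG_DATA_HOME equivalent)
    data_dir = BaseDirectory.save_data_path("glucometerutils")
except ImportError:
    # fallback when pyxdg is not installed
    data_dir = os.path.expanduser("~/.local/share/glucometerutils")
    os.makedirs(data_dir, exist_ok=True)

db = sqlite3.connect(os.path.join(data_dir, "readings.sqlite3"))
db.execute(
    """CREATE TABLE IF NOT EXISTS readings (
         timestamp    TEXT NOT NULL,
         value        REAL NOT NULL,
         meter_serial TEXT NOT NULL,
         PRIMARY KEY (timestamp, meter_serial)
       )"""
)


def store_reading(timestamp, value, meter_serial):
    """Insert one reading; duplicates from re-downloading a meter are ignored."""
    with db:
        db.execute(
            "INSERT OR IGNORE INTO readings VALUES (?, ?, ?)",
            (timestamp, value, meter_serial),
        )
```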

In a previous blog post I mused about the possibility of creating an Android app that implements these protocols. That would mean reimplementing them from scratch, as running Python on Android is difficult, so the code would need to be rewritten in a different language, such as Java or C# — indeed, that’s why I jumped at the opportunity to review PacktPub’s Xamarin Mobile Application Development for Android, which will be posted here soon. Before you fret: no, I don’t think that using Xamarin for this would work, but it was still an instructive read.

But after discussing it with some colleagues, I had an idea that is probably going to give me more headaches than writing an app for Android, and at the same time be much more useful. Chrome has a serial port API – in JavaScript, of course – which can be used by app developers. I don’t really look forward to implementing the UltraEasy/UltraMini protocol in JavaScript (that protocol is based on binary structs, while the Ultra2 should actually be easy, as it’s almost entirely 7-bit safe), but admittedly it solves a number of problems: storage, UI, print-out, portability, ease of use.

Downloading that data to a device such as the HP Chromebook 11 would also be a terrific way to make sure you can show it to your doctor — probably more so than on a tablet, and definitely more than on a phone. And I know for a fact that ChromeOS supports PL2303 adapters (the chipset used by the LifeScan cable that I’ve been given). The only problem with the idea is that I’m not sure how HTML5 offline storage is synced with the Google account, if at all — if I were to publish the Chrome app, I wouldn’t want to have to deal with HIPAA.

Anyway, for now I’m just throwing the idea around; if somebody wants to start before me, I’ll be happy to help!

Apple’s Biggest Screwup — My favourite contender

Many people have written about Apple’s screwups in the past — recently, the iPad Mini seems to be the major focus for just about everybody, and I can see why. While I won’t claim to know their worst secret, I’m definitely going to show you one contender that could genuinely be called Apple’s Biggest Screwup: OS X Server.

Okay, those of you who read my blog daily already know where this is going, because I blogged this last night:

OS X Server combines UNIX’s friendliness, Windows’s remote management capabilities, Solaris’s hardware support, and AIX’s software availability.

So let’s try to dissect this.

UNIX friendliness. This can be argued both positively and negatively — we all know that UNIX in general is not very friendly (I’m using the trademarked name because OS X, built in part on FreeBSD, is an actual certified UNIX), but it’s still friendlier to have a UNIX server than a Windows one. So if you want to argue it negatively, you don’t get all the Windows-style point-and-click tools for every possible service. If you want to argue it positively, you’re still running solid (to a point) software such as Apache, BIND, and so on.

Windows’s remote management capabilities. This is an extremely interesting point. While, as I just said, OS X Server provides you with BIND as its DNS server, you’re not supposed to edit the zone files by hand, but to leave it to Apple’s answer to the Microsoft Management Console — ServerAdmin. Unfortunately, doing so remotely is hard.

Yes, because even though it’s supposed to be usable from a remote host, it requires the same version on both sides, and that is impractical if your server is running 10.6 and your only client at hand is updated to 10.8. So this option has to be dropped entirely in most cases — you don’t want to keep updating your server to the latest OS, but you do keep your client updated, especially if you’re doing development on said client. Whoops.

So can you launch it through an SSH session? Of course not: ServerAdmin is not an X11 application. Despite all the people complaining about X11, the X protocol and SSH X11 forwarding are a godsend for remote management: if you have something like a very old version of libvirt and friends, or some other tool that can only be run in a graphical environment, all you need is another X server and an SSH client and you’re done.

Okay, so what can you do? Well, the first option would be to do it locally on the box, but that’s not really possible for a co-located server, so the second best is to use one of the many remote desktop techniques — OS X Server comes with Apple’s Remote Desktop server by default. While it listens on the standard VNC port (5900)… it does not seem to work with a standard VNC client such as KRDC. You really need Apple’s Remote Desktop Client, which is a paid-for proprietary app. Of course you can set up one of many third-party apps to connect to it, but if you didn’t think about that when installing the server, you’re basically stuck.

And I’m pretty sure this isn’t limited to the DNS server: Apache and the other services will probably have the same issues.

Solaris’s hardware support. This one should be easy: if you ever tried to run Solaris on real hardware, rather than just virtualized – and even then… – you know that it’s extremely picky. Last time I tried it, it wouldn’t run on a system with SATA drives, to give you an idea.

What hardware can OS X Server run on? Obviously, only Apple hardware. If you’re talking about a server, you have to remove all their laptops from the equation. If it’s a local server you could use an iMac, but the problem I’ve got is that mine is not used locally but at a co-location facility. The Xserve, which was the original host for OS X Server, is now gone forever, and that leaves only two choices: Mac Pro and Mac Mini. Which are the only ones sold with that version of OS X anyway.

The former hasn’t been updated in quite a long time, and it’s quite bulky to put at a co-location, even though I’ve seen enough messy racks to know that somebody could actually think about bringing one there. The latter just recently got an update that makes it sort of interesting, by giving you a two-HDD option…

But you still get a system that has 2.5”, 5400 RPM disks at most, with no RAID, and that tells you to use external storage if you need anything different. And since this is a server edition, it comes with no mouse or keyboard; just adding those means another $120. Tell me again why anybody in their right mind would use one of those as a server? And no, don’t remind me that I might have an answer on the tip of my tongue.

For those who might object that you can fit two Mac Minis in 1U – you really can’t; you need a tray, and you end up using 2U most of the time anyway – you can easily use something like SuperMicro’s Twin, which fits two completely independent nodes in a single 1U chassis. And the price is not really different.

The model I linked is quoted, from a quick googling, at around eighteen hundred dollars ($1800); add $400 for four 1TB hard disks (WD Caviar Black, that’s their going price; I’ve ordered eight of them since last April — four for Excelsior, four for work), and you get to $2200. Two Apple Mac Minis? $2234, with the mouse and keyboard that you need (the Twin system has IPMI support and remote KVM, so you don’t need them there).

AIX’s software availability. So yes, you can have MacPorts, or Gentoo Prefix, or Fink, or probably a number of other similar projects. The same is probably true for AIX. How much software is actually tested on OS X Server? Probably not much. While Gentoo Prefix and MacPorts cover most of the basic utilities you’d use on your UNIX workstation, I doubt that you’ll find the complete software coverage that you currently find for Linux, and that’s often enough a dealbreaker.

For example, I happen to have these two Apple servers (don’t ask!). How do I monitor them? Neither Munin nor NRPE is easy to set up on OS X, so they remain unmonitored, and I’m not sure I’ll ever actually monitor them. I’d honestly replace them just for the sake of not having to deal with OS X Server anymore, but it’s not my call.

I think Apple pulled off quite a feat, making me think that our crappy HP servers are not the worst thing out there…

IPv6 in the workplace

I noted last week that, for some reason I couldn’t understand, for some websites the access time was quite a bit lower over IPv6 than over IPv4. This seems to be consistent within the network as well, even though I’m still not sure whether it’s a matter of the smaller overhead of IPv6 itself, or mostly because the router in that case doesn’t have to do the same level of connection tracking for NAT and PAT.

But it’s not all smooth sailing: while NetworkManager is pretty happy to pick up both the address and the DNS server advertised by radvd, neither Mac OS X (10.5, 10.6 and 10.8) nor Windows 7 could get the DNS server. This is a known issue, and the only solution is a hybrid network: stateless autoconfiguration (radvd) plus DHCPv6 for the extra information (NTP and DNS servers, among others).

So I first tried to set up ISC DHCP to serve out the v6 information, since that was the DHCP server I was already using. But this is extremely cumbersome. The first problem is that you can’t have one single dhcpd process serving both DHCP and DHCPv6, even though they use different ports, so you have to make use of dhcpd’s init script multiplexing support. Okay, not that big a deal, is it? Strike two is that the configuration file can’t be shared either, even though the option names differ between the two implementations, so there wouldn’t even be any ambiguity. What?

Okay, so multiplexed init scripts and separate configuration files. Is that all? It should be, but honestly I’ve been unable to get it to work. I’m not sure if I just screwed up the configuration or something else, but it was trouble. Add to that the fact that the current init script gives you no way to just reload the configuration (you have to restart the service, with no configuration check on stop, which means you might take your DHCP down), and the fact that the man page for dhcpd.conf does not list most of the IPv6 options, and I got tired.

Luckily for me, net-dns/dnsmasq (which we’re already using as the local DNS server — I used unbound before, but in this case it seemed much easier since we need local hostnames, whereas at my house I simply used public IPv6 addresses) supports both DHCP and DHCPv6, responds to both with the same process, and supports a reload command. More interestingly, it seems like it could also take over the router advertisement job that radvd handles right now, but I haven’t tried that yet.
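For reference, this is the sort of configuration I have in mind if dnsmasq were to take over router advertisement as well; the keywords come from the dnsmasq documentation, I haven’t tested them myself, and the prefix is just a placeholder:

```
# have dnsmasq send router advertisements itself (replacing radvd)
enable-ra

# stateless DHCPv6: hosts keep autoconfiguring their own addresses,
# dnsmasq only answers the "other information" requests
dhcp-range=2001:db8:1:2::,ra-stateless

# [::] stands for the address of the machine running dnsmasq
dhcp-option=option6:dns-server,[::]
```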

With this change, finally, I was able to get Windows 7 and Mac OS X to make DNS requests to the router’s IPv6 address, in the hope that this improves the general responsiveness of the network (at a first glance it seems to be working). So I went over the various systems we have in the office to check what supports what, also testing with test-ipv6:

  • Windows 7 now gets both IPv6 addresses (temporary and MAC-based) and the DNS servers; test results 10/10;
  • Mac OS X Mountain Lion gets the stateless IPv6 address as well as the DNS server; test results 10/10;
  • Mac OS X Snow Leopard gets the IPv6 address but doesn’t see the DNS server either way; test results 10/10;
  • Linux gets the IPv6 address and the DNS server; test results 10/10;
  • Windows XP (after adding the protocol manually, of course) does not let you see which IP addresses it has, so I don’t know if it gets the DNS right, but it seems to work; test results 10/10;
  • Kindle Fire (first generation) does not show you the addresses it got, but the tests pass 10/10, so I assume it’s working;
  • iPhone running iOS 5 (a colleague’s) doesn’t show the addresses, but the tests also pass 10/10;
  • iPad running iOS 6 (mine) shows the IPv6 DNS address, but the tests don’t pass: 0/10;
  • Desire HD (CyanogenMod 7) doesn’t show any address, and the tests don’t pass: 0/10.

Something seems to be extremely wrong with these results honestly, but I’m not yet sure what.

Unfortunately, I haven’t had time to experiment with Flash and Red5 to see whether there is any reason for us to work on supporting IPv6 in our products yet (if those two components don’t support it, there’s no real reason for us to look into it for now), but in the meantime the advantages of starting to move to IPv6 are showing themselves pretty clearly.

Automator and AppleScript: the Bad and the Ugly

As you can see, there is no Good to be found here. This is going to be a bit of a personal rant about Apple’s technologies, so if you don’t care about Apple at all, and you subscribe to the idea that you don’t have to know what your competitors are up to, you may want to skip this post.

Situation — quite a long one, so feel free to skip.

If you follow my blog regularly, you probably remember the print shop for which I had to work around NATs and which pays me about a fifth of what I’m worth — mostly because I still consider them friends. A couple of months ago they moved from their original telco and ISP (FastWeb) to the ex-monopolist telco and ISP (Telecom Italia), with quite a bit of relief on my side, as that meant dropping the whole charade of IPv6 over miredo and just using a DMZ to connect to the jumphost without caring about IPv6 dynamic hosts (which I actually had to resurrect for another customer, but let’s leave it at that).

I had kept warning my customer that they’d lose their fax number if they switched telco; but when the marketer who sold them on the move explicitly assured them (twice) that both the fax and the phone numbers would be carried over, they decided to ignore the fact that I’ve been working in their best interest for the past few years. Turns out that I knew better: the fax number was dropped by the original telco and lost in the transition. And faxes are (unfortunately) still a big deal in Italy.

Thankfully, I was ready: a friend of mine operates a virtual telco with VoIP phone and fax lines, and I knew he had a decent fax-to-mail service and a fax-by-mail gateway. I referred them to him quickly, and a new fax line was easily established. Or maybe not so easily. While receiving faxes worked out of the box, sending them turned out to be a bit more complicated: only one PDF document at a time could be faxed, which was never an issue for me, but, more worrisome, the PDF documents produced by their (industrial) printer, a Xerox DocuPrint 700, caused the GhostScript instance used by the server for the conversion to crash.

Given that fixing this on the server side is non-trivial (the crash is reproducible, but the situation that causes it is very convoluted, and Ubuntu Server has no updates for the ghostscript package), I had to find an alternative route to solve the problem. Since I’m a developer and scripter, the solution was obvious: I wanted to script the conversion of the files into something easier for GhostScript to digest, and while I was at it simplify the whole procedure by just asking the secretary for the destination fax number and the files she wanted to send.

Introducing the Bad and the Ugly

Apple provides two main technologies/techniques to script their Mac environments: the “old-fashioned” AppleScript, which is a “usual” scripting language, with the difference of being very well integrated, having a relatively natural-language syntax, and being byte-compilable into applications; and, since Tiger, a shiny GUI-based scripting system called Automator.

I had already had a brief introduction to the former, but I needed something more to write the kind of script I wanted to provide to my customer, so I made good use of the Safari Books Online trial (I’m almost certainly going to subscribe, as it happens) and looked up a couple of titles on the topic: Learn AppleScript by Sanderson and Rosenthal, and Apple Automator with AppleScript by Myer. Then I dug into the topic for the whole afternoon.

Writing the first draft of the script was quite easy, but then I remembered that the only reason I have GhostScript available on my laptop is that I use Fink and other package managers there. The iMac used by the secretary wouldn’t have those at all (and I don’t intend to install them just for this); plus there is no free distribution of GhostScript prebuilt for Mac, so I started looking at alternative approaches.

A quick googling around tells you that while there isn’t a directly scriptable application that can be used to merge and convert PDF documents, there is an Automator action to do so, so I started looking into that technology as well. The idea behind it is extremely cool: instead of writing code in a syntax, it lets you build logical trains of actions with inputs and outputs. The basis for what I needed was actually easily described with a simple train: have the user choose some files, transform them into PDF if they aren’t already, merge them into a single document, create a new email message with that file attached, and send it.

Of course it’s not as straightforward as it might appear at first glance: the email needs to be sent to a particular address for the fax to go out, and that address depends on the destination fax number; the merged PDF document needs to be stored in a temporary location and removed after the mail is properly sent, and so on. To make the tool more useful, Apple also provides support for both default and custom variables, all of it designed with drag’n’drop.

Since you probably see by now that Automator at least appears cool, it is obviously not the Ugly of this story, but rather the Bad. While Automator allows you to combine a series of trains, by saving results into variables and then retrieving them, it does not allow multiple inputs to an action: extra inputs strictly need to be passed as variables. But not all fields of an action can be set from a variable! Interestingly enough, you cannot set the name of the merged PDF document through a variable, although you can change the path it is generated in that way. You also cannot choose the recipient of an email message through a variable, but you can change the subject. Oh, and while you can let the user choose which files to select, you have no way to restrict the selection to a given type of document. It’s ludicrous!

So Automator could possibly be helpful for the few people who have an idea of how applications work (at a high level) but still have no clue about implementing one; I can’t think of many people like that, to be honest. On the other hand, AppleScript is a powerful tool, which let me complete the task I had to take care of quite nicely. Apple’s design of Applets and Droplets, with the ability to combine the two, is one of the nicest things I have seen in a very long time. Unfortunately, it is just Ugly.

Don’t get me wrong, it’s not the syntax itself that is ugly; it actually reminds me enough of Ruby to be enjoyable to write. And the basic commands are easy to grasp even for someone who’s not a hardcore programmer. The problem is that, starting from a clean design that pre-dated OS X, the current situation is… messy. Files are accessed by default using the so-called HFS path, which looks something like Macintosh HD:Users:flame:something, even though the OS is now almost entirely written with Unix paths in mind; if you wish to call into the Unix toolset (which I had to, to combine the PDFs, as I’ll explain), you have to convert this to a Unix-style path.

This by itself wouldn’t be that much of a limitation; a bother, yes, but not a limitation as such. What makes it difficult to deal with is that even some of Apple’s own applications expect paths to be passed Unix-style! This is the case, for instance, with Apple Mail 3.0, which made me waste two hours wondering why, oh why, I could ask Preview to open the file but was still unable to attach it to the outgoing mail message.

The ugliest part concerns merging the PDF files; as I said, using GhostScript was easy, but having GhostScript installed isn’t, so I avoided that solution. One obvious method would have been to ask Preview to merge the PDF files, given that it can do that. But Apple didn’t make Preview scriptable by default, and I wanted to avoid tinkering with settings I would most likely forget about when (not if) I deploy the script again somewhere else.

Turns out that the implementation of Automator’s merge action boils down to a single Python script, which can easily be launched separately (Apple even provided usage documentation!). So in the end, the merging is done through that, even though it means I have to call into the Unix shell to complete the task, which takes a bit of time.
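For reference, this is the kind of invocation I mean; the path is from memory and may well differ between OS X releases, and the output flag is worth double-checking against the usage notes Apple ships with the script:

```
python "/System/Library/Automator/Combine PDF Pages.action/Contents/Resources/join.py" \
    -o /tmp/merged.pdf first.pdf second.pdf
```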

All of this together makes AppleScript definitely the Ugly. But at least it’s something, I guess.

I remember KDE 3 having some support for this kind of thing through kdialog and DCOP; I wonder what the situation is now. As far as I can tell, even though D-Bus should provide an even more complete interface to applications, on GNOME 2 it isn’t as easy to integrate workflows without being a developer. I wonder how GNOME 3 is dealing with this.

Apple and Linux do it differently

I’m not sure why it is that so many people on the one hand take shots at Apple whenever they have the chance, but at the same time are totally ready to “copy” its ideas – whether or not they make sense.

You probably remember when I criticised the FatELF idea, stating plainly that it only barely made sense for Apple, as they already had the knowledge and the design for it in their possession, while the investment required from Linux developers would just be unjustified compared to the positive effects to be gained by such an approach.

On a similar note, I never understood why people expected “dock-like” software to work on Linux just as well as it does on OS X. It probably has to do with the fact that they have seen OS X in screenshots, videos, maybe at a local store, but never used it enough to notice the subtle differences in how software behaves between the two.

As far as I can see, the new Ubuntu Lucid screenshots being shown around the net prominently feature a “dock-like” bar at the bottom; I have no idea whether that’s the default Lucid interface yet (I don’t have enough time to update my virtual machines, especially at the moment), but I fear that, if it is, nobody has rewritten most of the software available for X11 to behave like the software for OS X does — well, most of it at least.

A friend of mine, a long-time Apple user and quite the geeky kind of person, actually went as far as asking me since when Linux had learnt application handling from OS X. As I said, I doubt it has, yet. Whether this is good or bad is largely a matter of taste. Without entering a dispute that could be too long to be worth the effort, I’d like to explain what he and I are referring to in this instance. The Dock in OS X is not just an application launcher; it also replaces what Windows and most of the Unix GUIs I know of call the “taskbar”, showing which applications and windows are open… and here you might find the first clue that something’s off.

On Windows, as well as on X11, you have a record of which windows are open. The same software (application) can have multiple processes, each with zero or more windows, but you generally track windows in the taskbar. You usually get around this with tray icons, or notification area icons. Software like Pidgin creates an icon in the tray; you can use it to show or hide the contact list window, even when no other windows are open for that application/process.

Over the years, a number of application domains started heavy-handedly using the notification area, or system tray, to feign a “transparent” application running in the background: instant messengers, audio players and… OpenOffice. Both on Windows (for a long time) and on Linux (just recently, as far as I can tell), OpenOffice has a “quicklaunch applet” that is basically just a system tray icon keeping an OpenOffice process running while you have no OpenOffice document open. I’m quite sure there used to be a similar “applet” for Firefox, but I don’t think it ever reached Linux, and I’m not sure whether it still exists for Windows. Recounting all the possible causes and cases for this to have happened is definitely out of my reach, but I can make more than a few (in my opinion) acceptable hypotheses.

The first comes from looking at Windows, in particular the “huge” step between Windows 3.1 and Windows 95, one I lived through myself (yes, I have used Windows 3.1, admittedly just to play games and little more, since I was too young to do anything useful with it, but I recall it as well as I recall Windows 2.1!). Application windows in the older 3.x series of the interface (it wasn’t really an Operating System at the time), once minimised, went straight to the desktop – the “desktop-as-a-folder” idea that we grew so used to, to the point that KDE got quite a lot of flak for touching it with Plasma, is something that, unless I recall incorrectly, came from the Macintosh, and only reached “mainstream” Windows systems with 95 and later – and even at 640×480 the desktop was big enough to keep icons for many windows visible (it is, on the other hand, questionable whether the computers could run that many applications, but that’s a different problem; they probably could if they were all Notepads). On Windows 95, instead, the desktop became a folder, even if a slightly strange one, and the taskbar was introduced.

The taskbar was small, and you could only open a handful of windows before their titles became unreadable; at that point, things like WinAMP and other background-running programs didn’t look like they needed a taskbar entry at all; the system tray was also present, so they decided to have this “minimise to tray” (or “ignore taskbar”) option. Given that Windows lacked the idea of daemons, and that the non-NT kernels even lacked the idea of services, all the background, or long-running, processes piled up as tray icons. The idea became so universal that when I was toying with Borland C++ Builder (with the free-as-in-beer licenses they distributed with magazines here in Italy), there were tons of components that let you add a single extra button to the title bar: “Minimise to tray”.

The same idea then came down to Linux (and Unices in general) – because of the same taskbar/tray icon design present in KDE and GNOME – even though there was an attempt to solve the problem by adding the group-by-application feature to the taskbar. The same feature reached Windows with XP, while the NT-kernel series – which went mainstream with that same version, even more than with the previous Windows 2000 – already had services, the equivalent of Unix daemons.

Let’s look back at the other “notification area icons” (yes, I’m switching terminology back and forth between Windows and Linux, for one – in my opinion – very good reason: they are the same concept, the same thing; FreeDesktop tried forcing them to be different things, but they are used in the same way, and are thus the same; Ubuntu is now trying to change that once again, and I’m not sure how that will turn out): the quicklaunch applets.

Why do OpenOffice and Firefox (and Microsoft Office itself a long time ago, I think, had the same thing… maybe Office 97? I lost track of that stuff over the years) need such hacks? Well, the answer is functionally simple: to take less time to load a document. Reduce the start-up time of the application and the user will be happy, but how do you do that? Make sure the application is already running, invisible to the user… in the system tray. This is especially important on Windows because process creation time has historically been worse there than it is (was?) on Unix and Linux.

I’m not enough of an expert to know why it is that, on Windows and X11, applications terminate their process (or main process, in the case of things like Chrome) when all their windows are closed; it’s probably related to the fact that you might not want processes running that you cannot see at all (although both environments are full of such things nowadays!). But so it is: quicklaunch applets aside, once you close the last OpenOffice document window, or the last Firefox window, their process is gone, and the next start-up will require you to launch the application from nothing (which is, obviously, pretty slow even on Unix).

This is not the case on OS X, where you have a running application, which may or may not have windows open… when you close all the windows, the application keeps running, invisible to you except for a glow (or an arrow, on pre-Snow Leopard versions) under its icon in the Dock. Keep the application’s process running and you don’t have to load it from scratch, even after all its windows have been closed. Actually, I think this would be very nice to have on Linux, when used as a desktop.

I have heard of lots of software that tries to simulate the same behaviour by splitting into backend and frontend applications (and thus processes), using D-Bus or other methods to communicate with each other. How well this is working out, I cannot judge. I can certainly see why this is an interesting approach, but until all applications behave like this (if ever!), I don’t think “dock-like” software is a good fit for Linux.

This is no different from transplanting the development approach of applications written for OS X straight onto Linux, as I found done with Dropbox.

Bye Fedora

I’m going to say goodbye to my current Fedora 12 laptop; yes, the one for which I wrote that post about Fedora 10 at the time, which I then updated for Fedora 11. This is not because the laptop broke down, but rather because I ended up getting my MacBook Pro fixed, and that is again my main laptop. While I did want to have a laptop running Linux alongside the MBP running Mac OS X, I finally decided it’s pretty pointless for me.

There are multiple reasons for that; some have nothing to do with Fedora, but a few do. Marginally, maybe, but they do. The first problem is, once again, the video card. While it’s not like it has been easy with Yamato’s new one, I have to say that two and a half months later I’m definitely glad I got it: KMS with 2.6.32 (and a Git userland — I need to check whether that’s still needed, but I guess it will be for a while) works like a charm, I’m able to use Compiz without a glitch, and it’s perfectly stable. With the on-board nVidia card of that laptop, it’s a totally different story. The nvidia binary driver for that card is not (yet?) available for Fedora 12, and the nouveau driver is… useless. It’s not just a matter of lacking 3D acceleration: it’s also totally broken for suspend, which at least worked fine with the proprietary driver.

But it goes beyond hardware support; you have probably all heard about the thunderstorm around Fedora’s original decision to allow any user with console access to install new packages without the root password. I actually think that for Fedora’s target that’s a pretty good move: it is limited to installing and upgrading signed packages, which keeps the security implications small, and it’s just a default. For most users, having console access is as good as having root’s password, so it shouldn’t really matter; for desktop usage, that’s pretty much true already. Smarter, more security-paranoid users can easily change that setting. At any rate, the thunderstorm (or crapfest, if you prefer) got to them so much that they changed the default back; too bad. Unfortunately, it seems I got a different problem instead: my PackageKit interface is totally broken and I cannot use it at all; I have to use yum to upgrade my box, which is definitely not as nice.

At first I thought it had to be related either to the fact that I upgraded from F11 or to my use of RPM Fusion, but it turns out that the PackageKit interface is just as broken on a box that a customer of mine set up for me last week, so I could install a toolchain chroot for them. I ended up using yum there as well; no clue what the problem is.

And since upgrading to F12 I have found another problem as well: I already ranted about the fact that I couldn’t get Bluetooth dial-up to work with my Nokia phone and had to use the cable to work around it; following Adam’s suggestion I also got the JoikuSpot application, which turns the phone into an (ad-hoc) hotspot so it can be used via WLAN without configuring anything. The latter approach is, unfortunately, only viable if you have your phone’s power adapter at hand, since it lasts about an hour on my E75; and the other day (at my customer’s office) I didn’t have it available. I did have the cable, though, left in the bag since the last time I used it; unfortunately, when I tried to connect with that, exactly like I did on F11, NetworkManager decided to fail. And of course neither DUN nor PAN seems to be available via Bluetooth on F12, just as on F11.

So I’m considering whether I need that laptop at all: the MBP starts up in less than two seconds, thanks to the fact that I always leave it in suspend-to-RAM (and that’s faster than Google’s Chrome OS… I wonder why people seem to chase start-up times rather than fixing suspend support, bah); the MBP lasts more than four hours on its battery; the MBP has a much sleeker design, which makes it handier, and I don’t have to go around with the clunky power supply (not only because the MBP’s is smaller, but also because I have my mom’s supply downstairs if I’m running low on battery); the MBP (with OS X at least) can connect properly, via Bluetooth, to the phone and thus to the Internet (most of the time, at least). So in the end, I’m not going to use the Compaq for much.

I’ll create a Fedora 12 virtual machine on Yamato for testing my projects there, where most of the previous notes about stuff not working properly will be moot points.

*Post scriptum: I wrote the draft for this article a couple of days ago, and in the meantime I set up the Fedora 12 virtual machine I mentioned in the last paragraph; it was that way, by trying out virtio, that I found the nth qemu/kvm quirk that made me drop the “proper” qemu. Unfortunately, with that new install (from scratch, not an update) I found another share of problems.*

*The remote desktop support in GNOME is totally broken: I can see with tcpdump that the request arrives, but no reply is given at all. If you set a hostname in three parts (say, fedora12.qemu.local), Avahi will advertise fedora12.local instead. system-config-services is not installed by default, and the first time I installed it I had to reboot, otherwise I would only get crashes. One default cron job causes SELinux to report invalid accesses to /var/lib… all in all, it seems to me like Fedora 11 was way more polished!*

Interesting notes about filesystems and UTF-8

You probably already know that I’m a UTF-8 enthusiast; I use UTF-8 extensively in everything I do (mail, writing, IRC, whatever), not only because my name can only be spelled right when UTF-8 is used, but also because it’s much nicer to write text with proper arrows rather than semigraphical ones, and proper ellipses as well as dashes.

On Linux, UTF-8 is not always easy to get right: there is quite a bit of software out there that does not play nice with UTF-8 and Unicode, including our GLSA handling software, and that can really be a bother to me. There are also problems when interfacing with filesystems like FAT that don’t support UTF-8 in any way.

Not so on Mac OS X, usually, because there the system was designed from the start to make use of Unicode and UTF-8, including the filesystem, HFS+. There is, though, one big catch: Unicode has more than one way to produce the same character, either as a single precomposed codepoint or as a base character followed by combining diacritical marks, the latter being more complex but easier to compare case-insensitively. Since HFS+ can be case-insensitive (and indeed it is by default, and has to be for the operating system volume), Apple decided to force the latter form for UTF-8 text on HFS+: all file names are normalised before being used. This works fine for them, and the filenames are usually just as readable from Linux.

But there is a problem. Since I have lots of music in iTunes to be synced to my iPod, I usually keep my main music archive on OS X and then rsync it over to Linux repeatedly, so I can play it on my main system (or at least try to, since most of the audio players I have found are lousy for what I need). In my music archive I have many tracks from Hikaru Utada (宇多田ヒカル), which are named with their original titles (most of them come from the iTunes Store itself; others are ripped from my CDs); one EP I have is titled SAKURAドロップス, and in this title there are two characters that get decomposed into a base character plus a combining marker (ド and プ). While it might not be obvious why, I’ll just rely on Michael Kaplan to explain why that happens.
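Python’s unicodedata module shows what is going on; strictly speaking HFS+ uses its own decomposition table rather than plain NFD, but for these katakana the result is the same:

```python
import unicodedata

title = "SAKURAドロップス"

composed = unicodedata.normalize("NFC", title)    # single codepoints, e.g. ド = U+30C9
decomposed = unicodedata.normalize("NFD", title)  # base + combining mark, ト + U+3099

print(len(composed), len(decomposed))  # 11 13: ド and プ become two codepoints each

print([hex(ord(c)) for c in unicodedata.normalize("NFD", "ド")])
# ['0x30c8', '0x3099'], KATAKANA LETTER TO followed by the
# COMBINING KATAKANA-HIRAGANA VOICED SOUND MARK
```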

Now, the synced file keeps the normalised filename, which is fine. The problem is that something does not work right in zsh, gnome-terminal, or both. On Gentoo, with a local gnome-terminal, both when showing me the completion alternatives and when actually completing the filename, instead of ド I get ト<3099>; on Fedora via SSH, the completion alternatives are fine, while the command line still gets the non-recomposed version after completion.

Update (2017-04-28): I feel very sad to have found out over a year and a half later that Michael died. The links in this and other posts to his blog are now linked to the archive kindly provided and set up by Jan Kučera. Thank you, Jan. And thank you, Michael.

Hardware signatures

If you read Planet Debian as well as this blog, you have probably noticed the number of Debian developers who changed their keys recently, after the shadows cast over the SHA-1 hash algorithm. It is debatable whether this is an issue right now or not, but that’s not what I want to discuss.

There are quite a few reasons why Debian developers are more interested in this than Gentoo developers; while we also sign manifests, there are quite a few things that don’t work that well in our security infrastructure, which we should probably pay more attention to (but I don’t want to digress now), so I don’t blame them for considering tighter security.

I’m also considering the switch; while I have had my key for quite a while, there are a few issues with it: it’s not signed by any Gentoo developer (I actually don’t think I have met anybody in person with whom to exchange documents and such), the Manifest signing key is not a subkey of my actual primary key (which, by the way, contains lots of data from my previous “personas” that doesn’t matter any longer), and so on and so forth. Revoking it all and starting anew might be a good solution.

But, before proceeding, I want to finally get this over with and move to hardware cryptography if possible; I have expressed interest in this before, but I never dug deep enough to find the important information, and now I’m looking for exactly that. And I want a solution that works in the broadest range of cases:

  • I want it to work without SHA-1; I guess this already starts to be difficult; while it’s not clear whether SHA-1 is weak enough to be an actual vulnerability or not, being able to sidestep the issue by using a different algorithm is certainly a desirable feature;
  • I want it to work with GnuPG and OpenSSH at least; if there is a way to get it to work with S/MIME, that might also be a good idea;
  • I want it to work on both Linux and Mac OS X: I have two computers in my office, Yamato running Gentoo and Merrimac running OS X; I have to use both and can’t do without either; I don’t care if I don’t have GnuPG working on OS X, but I still need it to work with OpenSSH, since I would like to use it for remote access to my boxes;
  • as an extension of the previous point, I guess it has to be USB: not only can I then switch it between the two systems (hopefully!), but I’m also going to get a USB switch to use a single console between the two;

I guess the obvious solution would be a tabletop smartcard reader with one or more cards (and I could get my ID card to be a smartcard), but there is one extra consideration: one day I’m going to have a laptop again, and what then? I was thinking about all-in-one tokens, but I have even less knowledge about those than I have about smartcards.

Can anybody suggest a solution? I understand that the FSFE card only supports 1024-bit keys, which lately seems to be considered on the weak side; no idea how much of that is true, though, to be honest.

So, suggestions, very welcome!

Productivity improvement weekend

This weekend I’m going to try my best to improve my own productivity. Why do I say that? Well, there are quite a few reasons for it. The first is that I spent the last week working full time on feng, rewriting the older code to replace it with simpler, better tested and especially better documented code. This is not an easy task, especially because you often end up rewriting other parts to play nicely with the new ones; indeed, to replace bufferpool, Luca and I rewrote almost the entire networking code.

Then there is the fact that I finally got a quote for replacing the logic board of my MacBook Pro, which broke a couple of weeks ago: €1K! That’s almost as much as a new laptop; sure, not the same class, but still. In the meantime I bought an iMac; I needed access to QuickTime, even more than I knew before, because we currently don’t have a proper RTSP client: MPlayer does not support seeking, FFplay is broken in a few ways, and VLC does not behave in a very standards-compliant way either. QuickTime, instead, is quite well mannered. But this means I have spent money to keep the job going, which is, well, not exactly the nicest thing you can do when you need to pay off some older debts too.

So it means I have to work more; not only do I have to continue my work on lscube full time, but I’m going to have to take more jobs on the side; I have been asked about a few projects already, but most seem to require learning new frameworks or even new programming languages, which means they require quite a big effort. I need the money, so I’ll probably take them, but it’s far from optimal. I’ve also put on nearly permanent hold the idea of writing an autotools guide, either as an open book or a real book; the former has drawn no interest among readers of my blog, the latter no interest among publishers. I’m starting to feel like an endangered species as far as autotools are concerned, alas.

But since I need access to the FFmpeg mailing list at least for lscube, and access to the PulseAudio mailing list for another project, and so on and so forth, I need to solve one problem I already wrote about: purging GMail labels from older messages. I really need this solved, but I’m still not totally in luck. Thanks to identi.ca, I was able to get the name of a script designed to solve the same problem: imap-purge. Unfortunately, it runs into one GMail quirk: deleting a message from a “folder” (actually a GMail label) does not delete the message from the server, it only detaches the label from that message; to delete a message from the server you have to move it to the Trash folder (and either empty it or wait 30 days for it to be deleted). I tried modifying imap-purge to do that, but my Perl is nearly non-existent and I couldn’t even grok the documentation of Mail-IMAPClient regarding the move function.
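The rough shape of what such a script needs to do, sketched here in Python’s imaplib rather than Perl; the label, cutoff date and credentials are obviously placeholders, and depending on the account’s locale the Trash folder may be called “[Google Mail]/Trash” instead:

```python
import imaplib

LABEL = "lists/ffmpeg-devel"   # the GMail label to purge, seen as an IMAP folder
CUTOFF = "01-Jan-2009"         # IMAP date format: DD-Mon-YYYY

conn = imaplib.IMAP4_SSL("imap.gmail.com")
conn.login("user@gmail.com", "password")
conn.select('"%s"' % LABEL)

# find everything older than the cutoff...
typ, data = conn.search(None, "BEFORE", CUTOFF)
for num in data[0].split():
    # ...and copy it to Trash: on GMail this is what actually deletes the
    # message from the server; expunging alone would only drop the label
    conn.copy(num, "[Gmail]/Trash")
    conn.store(num, "+FLAGS", r"\Deleted")

conn.expunge()
conn.logout()
```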

So this weekend either I find someone to patch imap-purge for me, or I’ll have to write my own script based on its ideas, in Ruby or something like that. A waste of time on one hand, but it should allow me to save time further on.

I also need to get synergy up to speed in Gentoo; there have been a few bugs opened regarding crashes and other problems, plus requests for start-up scripts and SVN snapshots. I’ll do my best to work on that so that I can actually use a single keyboard and mouse pair between Yamato and the iMac (which I named, with a little pun, USS Merrimac; okay, I’m a geek). Last time I tried this, I had some problems with synergy deciding to map/unmap keys to compensate for the keyboard differences between X11 and OS X; I hope I can get this solved this time, because one thing I hate is having different key layouts between the two.

I also have to find a decent way to have my documents available on both OS X and Linux at the same time, either by rsyncing them in the background or by sharing them over NFS. It’s easier if they are available everywhere at once.

The tinderbox is currently not running, because I wouldn’t have time to review the build logs. In the past eight days I turned on the PlayStation 3 exactly twice: once earlier today to try relaxing with Street Fighter IV (I wasn’t able to), and the other time just to try one thing about UPnP and HD content. I was barely able to watch last week’s Bill Maher episode, and not much more. I seriously lack that precious resource called time. And this is after I have shown the thing called “real life” almost entirely out of the door.

I sincerely feel absolutely energy-deprived; I guess it’s also because I didn’t have my after-lunch coffee, but there are currently two salesmen boring my mother with some vacuum cleaner downstairs and I’d rather not go meet them. Sigh. I wish life were easy, at least once a year.

International problems

I’m probably quite a strange person myself, that much I knew, but I never thought I would actually have so many problems when it comes to internationalisation, especially (but not only) on Linux. I have written before that I have problems with my name (and a similar issue happened last week, when the MacBook I ordered for my mom was sent by TNT to “Diego Petten?”, which then wouldn’t be found properly by the computer system when looking up the package by name), but lately I have been having even worse problems.

One of the first problems happened while mailing patches with git to a mailing list hosted on the Kernel.org servers; my messages were rejected because I used “Diego E. ‘Flameeyes’ Pettenò” as the sender, without double quotes around it. Per the mail RFCs, when a period is present in the sender or destination name, the whole name has to be enclosed in double quotes, but git does not seem to know about that and sends malformed email messages that get rejected. Even adding the escaped quotes in the configuration file didn’t help, so in the end I send my git email with my (new) full name “Diego Elio ‘Flameeyes’ Pettenò”, even if it’s tremendously long and boring to read, and Lennart scolded me because I now figure under three different aliases in PulseAudio (on the other hand, ohloh handles that gracefully).
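For the record, Python’s email module applies the quoting rule correctly, which makes for a quick way to check what a compliant sender header should look like (the name and address here are just placeholders):

```python
from email.utils import formataddr

# a period in the display name forces the whole name into double quotes
print(formataddr(("J. Random Hacker", "user@example.org")))
# "J. Random Hacker" <user@example.org>
```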

A little parenthesis, if you’re curious where the “Elio” part comes from: I legally changed my name last fall, adding “Elio” as part of my first name (it’s not a “middle name” in the strict meaning of the term, because Italy does not have the concept of middle names; it’s actually part of my first name). The reason for this is that there are four other “Diego Pettenò” in my city, two of whom are around my age, and the Italian system is known for mixing up identities; while just adding another name does not make me entirely safe, it should make such a mistake less likely. I chose Elio because it was my grandfather’s name.

So that was one of the problems; nothing really major, and it was solved easily. The next problem happened today, when I went to write some notes about extending the guide (for which I still fail to find a publisher; unless I find one, it’ll keep the open donation approach), and, since the amount of blogging about the subject lately has been massive, I wanted to make sure I used proper typographical quotation marks. It would have been easy to type them on OS X, but on Linux it seems to be quite a bit more difficult.

On OS X, I can reach the quotation marks on the “1” and “2” keys, adding the Option and Shift keys accordingly (single and double, open and closed); on Linux, with the US English Alternate International keyboard layout I’m using, things are quite a bit more difficult. The sequence would be something like Right Control, followed by AltGr and ' (or "), followed by < or >; even if I didn’t have to use AltGr to get the proper keys (without AltGr, on the Alternate International layout the two symbols are “dead keys” used for composing, which is quite important since I write both English and Italian with the same keyboard), it would still be a clumsy way to access the two. And it also wouldn’t work with GNU Emacs on X11.

My first idea would have been to use xmodmap to just change the mappings of “1” and “2” to add third and shifted-third levels, just like on OS X. Unfortunately, adding extra levels with xmodmap seems to only work with the “mode switch” key rather than with the “ISO Level 3” key; the final result is that I had to “sacrifice” the right Command key (I use an Apple keyboard on Linux) to use as “mode switch” (keeping the right Option as the Level 3 shift), and then map the “1” and “2” keys the way I wanted. The result is usable, but it also means that all the modifiers on the right side now have completely different meanings from what they were designed for, and it’s not easy to remember them all.
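For the record, the resulting ~/.Xmodmap looks roughly like this; the keycode for the right Command key depends on the keyboard and X server (xev will tell you the right one), and which key gets the opening or closing marks is just a matter of taste:

```
! right Command acts as Mode_switch (check the keycode with xev)
keycode 134 = Mode_switch
! "1" and "2" gain the typographical quotes on the Mode_switch level,
! plain and shifted
keysym 1 = 1 exclam leftsinglequotemark leftdoublequotemark
keysym 2 = 2 at rightsinglequotemark rightdoublequotemark
```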

I thought about using the Keyboard Layout Editor, but it requires antlr3 for Python, which is not available in Gentoo and seems to be difficult to update, so for now I’m stuck with this solution. Next week, when the iMac should arrive, I’ll probably spend some more time on the issue (I already spent almost the whole afternoon on it, more than I should have); I’d sincerely love to be able to set up the exact same keyboard layout on both systems, so I don’t have to remember which one I’m on to get the combinations right. I already publish my custom OS X layout, which basically implements the Xorg Alternate International layout on OS X (the same layout is already available on Windows as “US International”, so OS X was the only one lacking it), so I’ll probably just start maintaining layouts for both systems in the future.

And I don’t even want to start talking about setting up a proper Japanese IME under this configuration…