RTSP clients’ special hell

This week, in Orvieto, Italy, there was OOoCon 2009, and the lscube team (also known as “the rest of the feng developers besides me”) was there to handle the live audio/video streaming.

During the preparations, Luca called me one morning, complaining that the new RTSP parser in feng (which I wrote almost single-handedly) refused to play nice with the VLC version shipped with Ubuntu 9.04. The problem was tracked down to the parser for the Range header, in particular to the parsing of the normal play time value: the RFC specifies a decimal value with a dot (.) as the separator, but VLC was sending a comma (,), which my parser refused.

Given that Luca actually woke me up while I was in bed, it was a strange presence of mind that let me ask him which language (locale) the system was set to: Italian. Telling him to try the C locale was enough to get VLC to comply with the protocol. The problem here is that the separators for decimal places and thousands are locale-dependent characters; most programming languages limit themselves to supporting the dot, and a lot of software likewise uses the dot no matter what the locale is (for instance, right now I have Transmission open and the download/upload stats use the dot, even though my system is configured in Italian). Funny that this problem came up during an OpenOffice event, given that OpenOffice is definitely one of the best-known pieces of software that actually deals with (and sometimes messes up) that difference.

To be precise, though, the problem here is not with VLC itself: it is with the live555 library (badly named media-plugins/live in Gentoo), which provides the generic RTSP code for VLC (and MPlayer). If you have ever written software that deals with float-to-string conversion, you probably know that the standard printf()-like interface does not pick up the locale’s separator unless the program explicitly asks for it; but live555 is a C++ library, and it probably uses string streams.

At any rate, the bug was known and already fixed in live555, which is what Gentoo already has, and what the contributed bundled libraries of VLC have (for the Windows and OS X builds), so those three VLC instances are just fine; but the problem is still present in both the Debian and Ubuntu versions of the package, which are quite outdated (as xtophe confirmed). Since the RFC does not have any conflicting use of the comma in that particular place, and given how widespread the broken package is (Ubuntu 9.10 has the same problem), we decided to work around it inside the feng parser, accepting the comma-separated decimal value as well.
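For the record, the workaround boils down to treating the comma as an alternative decimal separator in NPT values. A minimal sketch of the idea in Python (feng’s actual parser is a Ragel state machine in C, and these function names are mine, not feng’s):

```python
# Sketch of the Range-header workaround: accept both "." and "," as
# the decimal separator when parsing a Normal Play Time value.
# Hypothetical helper names, for illustration only.

def parse_npt_value(text: str) -> float:
    """Parse an NPT seconds value, tolerating a locale-style comma."""
    return float(text.replace(",", "."))

def parse_npt_range(header: str):
    """Parse an 'npt=start-end' range; either endpoint may be empty."""
    if not header.startswith("npt="):
        raise ValueError("not an NPT range")
    start, _, end = header[len("npt="):].partition("-")
    return (parse_npt_value(start) if start else None,
            parse_npt_value(end) if end else None)
```

With this, the broken VLC request `Range: npt=0,5-` parses to the same value as the standard-compliant `npt=0.5-`.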

From this situation, I also ended up comparing the various RTSP clients that we are trying to work with, and the results are quite mixed, which is somewhat worrisome to me:

  • the latest VLC builds for the proprietary operating systems (Windows and OS X) work fine;
  • VLC as compiled in Gentoo also works fine, thanks Alexis!
  • VLC as packaged for Debian (and Ubuntu) uses a very old live555 library; the problem described here is now worked around, but I’m pretty sure it’s not the only one we’re going to hit in the future, so it’s not a good thing that the Debian live555 packaging is so old;
  • VLC as packaged in Fedora fails in many different ways: it loops for about 15 minutes saying that it cannot identify the host’s IP address, then it finally seems to get a clue and is able to request the connection, but… it starts dropping frames, saying that it cannot decode them and so on (and I’m connected over gigabit LAN);
  • Apple’s QuickTime X is somewhat strange: on Merrimac, since I used it to test the HTTP tunnel implementation, it now only tries connecting to feng via HTTP rather than RTSP; this works fine with the branch that implements the tunnel but obviously fails badly in master (and QuickTime doesn’t seem to take the hint to switch back to the RTSP protocol); on the other hand, it works fine on the laptop (which has never used the tunnel in the first place), where it uses RTSP properly;
  • again Apple’s QuickTime, this time on Windows, seems to be working fine.

I’m probably going to have to check the VLC/live packaging of other distributions to see how many workarounds for broken stuff we might have to look out for. That means more and more virtual machines; at this pace I’ll probably have to get one more hard drive (or I could replace a 320G drive with a 500G drive that I still have at home…). And I should try totem as well.

Definitely, RTSP clients are a hell of a thing to test.

Apple’s HTTP tunnel, and new HTTP streaming

Finally, last night, I was able to finish, at least in a side branch, the support for Apple’s RTSP-in-HTTP tunnelling, as dictated by their specification. Now that the implementation is complete (and it really didn’t take that much work once the parser worked as needed), I can tell a few things about that specification, and about Apple phasing it out in favour of a different, HTTP-only streaming system.

First of all, the idea of supporting both the RTSP and the RTSP-in-HTTP protocols, while keeping the same exact streaming logic behind the scenes, requires a much more flexible parser, which isn’t easy because of the HTTP design issue I already discussed. Of course, once the work is done, it’s done; but the complexity of such a parser isn’t negligible.

But since the work took me quite a short time, it wouldn’t be that bad, if only the technique worked as well as it’s supposed to. Unfortunately, that’s not the case. For instance, the default configuration of net-proxy/polipo (a French HTTP proxy) does not allow the technique to work, because of the way the tunnel is designed: pipelining and connection re-use, which are very common things for proxies to do to improve performance, usually wait for the server to complete a request before it is returned to the client; unfortunately, the GET request made by the client is one that will never complete, as it is where the actual streaming happens.

In the end, I found it definitely easier to just use good old squid for testing, even though its documentation, at one (very hidden) point, explains which parameters to set to make it work with QuickTime. But it definitely means that not every HTTP proxy will let this technique work correctly.

And that’s definitely not the only reason. Since the HTTP and RTSP protocols are pretty similar, even the documentation says that if the client POSTed the RTSP requests directly, they would be seen as bad HTTP requests by the proxy; to avoid that, the requests are sent base64-encoded (which means bigger than the original). But while the data coming from the client is usually scrutinised more, proxies nowadays probably scrutinise the responses as well as the requests, to make sure they are not dealing with a malicious server (phishing or the like); and if they do, they are very likely to find the response coming from the GET request quite suspicious, probably considering it an attempt at HTTP response splitting (which is a common webapp vulnerability).

Now, of course it would have been possible for Apple to simply extend the trick by encoding the response as well as the request, but that has one huge drawback: it would both increase the latency of the stream (because the base64 content would have to be decoded before it’s used) and at the same time increase the size of the response by ⅓ (one third) due to that kind of encoding. Another alternative would have been to base64-encode only the pure RTSP responses, keeping the RTP streams (which are carried over interleaved RTSP) unencoded. Unfortunately, this would have required more work, since at that point the GET body wouldn’t simply be stream-compatible with a pure RTSP stream, and thus wouldn’t be very transparent for either the client or the server.
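The one-third figure comes straight from how base64 works: every 3 input bytes become 4 output characters. A quick sanity check:

```python
import base64

# Base64 maps every 3 input bytes to 4 output characters (plus up to
# two '=' padding characters), so a tunnelled request or response
# grows by one third, which is exactly the cost mentioned above.
request = b"x" * 300          # stand-in for a 300-byte RTSP message
encoded = base64.b64encode(request)
assert len(encoded) == 400    # 300 * 4/3
```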

On the other hand, the idea of implementing that as an extension hasn’t entirely disappeared from my mind; since channels one and up are used by the RTP streams, channel zero is still unused, and it would be possible to simply use that to send the RTSP responses encoded in base64. At least in feng this wouldn’t require huge changes to the code, since we already treat channel zero specially for the SCTP connection.
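To make the idea concrete: interleaved RTSP frames (per RFC 2326) are just a ‘$’ byte, a one-byte channel id, and a 16-bit big-endian length before the payload, so the extension would amount to reserving channel zero for the base64-encoded RTSP responses. A hypothetical sketch, not feng code:

```python
import struct

# RFC 2326-style interleaved frame: '$', one-byte channel id, and a
# 16-bit big-endian payload length, followed by the payload itself.
# The extension idea above would reserve channel 0 for base64-encoded
# RTSP responses, leaving channels 1 and up to the RTP/RTCP streams.

def pack_interleaved(channel: int, payload: bytes) -> bytes:
    return struct.pack(">cBH", b"$", channel, len(payload)) + payload

def unpack_interleaved(frame: bytes):
    magic, channel, length = struct.unpack(">cBH", frame[:4])
    assert magic == b"$"
    return channel, frame[4:4 + length]
```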

With all these details considered, I can understand why Apple was looking into alternatives. What I still cannot understand is what they decided to use as the alternative, since the new HTTP Live Streaming protocol still looks tremendously hacky to me. Hopefully, our next step is rather going to be Adobe’s take on a streaming protocol.

HTTP-like protocols have one huge defect

So you might or might not remember that my main paid job in the past months (and right now as well) has been working on feng, the RTSP server component of the lscube stack.

The RTSP protocol is based off HTTP, and indeed uses the same message format defined by RFC 822 (the same used for email messages), and a request line “compatible” with HTTP.

Now, it’s interesting to know that this similarity between the two has been used, among other things, by Apple to implement so-called HTTP tunnelling (see the QuickTime Streaming Server manual, Chapter 1 “Concepts”, section “Tunneling RTSP and RTP Over HTTP”, for the full description of the procedure). This feature allows clients behind standard HTTP proxies to access the stream, creating a virtual full-duplex communication between the two endpoints. Pretty neat stuff, even though Apple recently superseded it with the pure HTTP streaming implemented in QuickTime X.
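For reference, the tunnel as Apple documents it ties together two HTTP connections through an x-sessioncookie header: the GET becomes the server-to-client half, while a POST with an intentionally oversized Content-Length carries the base64-encoded RTSP requests. Roughly like this (host, path, and cookie value are made up for illustration):

```python
# Sketch of the two HTTP requests a tunnelling client issues, paired
# together by the x-sessioncookie value. Not literal QuickTime output.

COOKIE = "4567basedoncurrenttime"

get_request = (
    "GET /stream.sdp HTTP/1.0\r\n"
    "x-sessioncookie: %s\r\n"
    "Accept: application/x-rtsp-tunnelled\r\n"
    "Pragma: no-cache\r\n\r\n" % COOKIE
)

post_request = (
    "POST /stream.sdp HTTP/1.0\r\n"
    "x-sessioncookie: %s\r\n"
    "Content-Type: application/x-rtsp-tunnelled\r\n"
    "Content-Length: 32767\r\n\r\n" % COOKIE  # body never really ends
)
```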

For LScube we want to implement at the very least this feature, both server- and client-side, so that we can get on par with the QuickTime features (implementing the new HTTP-based streaming is part of the long-haul TODO, but that’s beside the point now). To do that, our parser has to be able to accept HTTP requests and deal with them appropriately. For this reason, I’ve been working to replace the RTSP-specific parser with a more generic parser that accepts both HTTP and RTSP. Unfortunately, this turned out not to be a very easy task.

The main problem is that we wanted to make as few passes as possible over the request line to get the data out; when we only supported RTSP/1.0 this was trivial: we knew exactly which methods were supported, which ones appeared valid but weren’t supported (like RECORD), and which ones were simply invalid to begin with, so we set the value for the method in passing and then moved on to check the protocol. If the protocol was not valid, we didn’t care about the method anyway; at worst we had to pass through a series of states for no good reason, but that wasn’t especially bad.

With the introduction of a simultaneous HTTP parser, the situation became much more complex: the methods are parsed right away, but the two protocols have different methods: the GET method that is supported for HTTP is a valid but unsupported method for RTSP, and vice versa for the PLAY method. The actions that handled the result of parsing the method for the two protocols would end up executing simultaneously if we used a simple union of state machines, and that, quite obviously, couldn’t be the right thing to do.

Now, it’s really simple to understand that what we needed was a way to discern which protocol we’re parsing first, and then proceed to parse the rest of the line as needed. But this is exactly what I think is the main issue with the HTTP protocol and all the protocols that, like RTSP or WebDAV, derive from or extend it: the protocol specification is at the end of the request line. Since you usually parse a line in the Latin order of characters (from left to right), you read the method before you know which protocol the client is speaking. This is easily solved by backtracking parsers (I guess LALR parsers is the correct definition, but parsers aren’t usually my field of work, so I might be mistaken), since they first pass over the text to identify which syntax to apply, and then they apply it; Ragel is not such a parser, while kelbt (by the same author) is.

Time constraints, and the fact that kelbt is even more sparingly documented than Ragel, mean that I won’t be trying kelbt just yet; for now I settled for finding an overcomplex and nearly unmaintainable workaround to have something working (since the parsing is going to be a black-box function, the implementation can easily change in the future when I learn a decent way to do it).

This whole thing would have been definitely simpler if the protocol specification were at the start of the line! At that point we could just have decided how to parse the rest of the line depending on the protocol.
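To show what I mean, here is a rough two-pass sketch (in Python rather than Ragel, with a deliberately reduced method list): look at the last token of the request line first, then validate the method against the right protocol.

```python
# Two-pass dispatch over a request line: peek at the trailing protocol
# token, then check the method against that protocol's own set.
# The method lists here are abbreviated for illustration.

RTSP_METHODS = {"OPTIONS", "DESCRIBE", "SETUP", "PLAY", "PAUSE", "TEARDOWN"}
HTTP_METHODS = {"GET", "POST"}

def classify_request_line(line: str):
    method, _, rest = line.partition(" ")
    target, _, protocol = rest.rpartition(" ")
    if protocol.startswith("RTSP/"):
        supported = method in RTSP_METHODS
    elif protocol.startswith("HTTP/"):
        supported = method in HTTP_METHODS
    else:
        raise ValueError("unknown protocol")
    return protocol, method, supported
```

A single left-to-right state machine has already committed to a method action before it ever sees the protocol token, which is exactly the conflict described above.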

At this point I’m definitely not surprised that Adobe didn’t use RTSP and instead invented their own Real-Time Messaging Protocol, which is not based on HTTP but is rather a binary protocol (which should also make it much easier to parse, to an extent).

Cellphones… sigh!

There is a lot of talk about the Linux-based cellphones out there, I guess lately mostly due to Nokia’s release of the N900; I am sincerely still sticking with the Nokia E75, after switching last year to the E71 (well, it’s not my fault if 3 is giving me the chance to switch phones paying ¼ of what they’re worth on the market…), but I am starting to wonder if it was a good idea.

Don’t get me wrong, the phone is good, and so is most of the software on it; unfortunately there are quite a few problems related to it, although I really don’t know how much better or worse other systems can be:

  • While most of the software on the phone lets me choose the “Internet” aggregated connection as the default connection (something very good that Nokia added with this release of their S60 firmware), the mail client doesn’t… that means it keeps asking me which connection to use whenever it has to check the mailbox. Yes, I could tell it to use the direct connection, but then it would try to use it even when I was outside of the standard 3 network coverage, and that’s definitely bad. Plus I prefer to use WiFi when I have it available.
  • Again the mail client: it doesn’t tell me whether there are subfolders with unread messages; I have to check them all by myself, which is quite boring when you simply want to see if you got mail.
  • The browser is a bit puny sometimes; yes, it works most of the time, but there are a few things that bother me tremendously, one of which is the fact that, while it remembers passwords set in forms, it doesn’t remember HTTP digest auth passwords! Which is what I’m using, ça va sans dire.
  • The Contacts on Ovi application (an XMPP client) is definitely strange; even though I have the latest version, sometimes it goes crazy with the contacts, and there are people I used to have as contacts in there that I cannot find any longer; the fact that they don’t allow using just any XMPP account, only Ovi accounts, doesn’t really help.
  • Non-Latin characters cannot be displayed; not only Japanese text (for track names of Japanese music, for instance), but also little things like dashes (—), typographical quotes (“”) and arrows (→) cannot be displayed, neither in web pages nor in mail messages. This is pretty upsetting to me since I ♥ Unicode.
  • And most importantly, writing applications for Symbian is nigh impossible, at least without using Windows, and I don’t see that anything has changed since I last checked. And since I’m a developer, sometimes I wish I were able to just write my own applications for the stuff I need.

Now I guess I’ll have to start considering some ideas on what I’ll go with next time. The choices are most likely the iPhone, Android, and Nokia’s N900; none of them looks really short-term to me because they all involve pretty expensive phones — I didn’t pay more than €120 for my current phone. But before I can even think about a decision, I need some further information, and I’m not really keen on going out to find it right now: I can barely find the time to write this while I wait for two compile processes to finish, since I’m fully swamped with work. So I’m writing my questions here, and maybe some of you can help me with them…

Are they able to switch between 3G and WiFi connectivity as needed? Can they blacklist 3G while roaming, and then whitelist a specific network? (This is because when I’m under another 3 network, outside of Italy, the Nokia detects roaming, but the same local tariffs apply, so it really should feel like the home network to the phone as well.)

I know that the iPhone does, but what about the other two? Do they support IMAP with the IDLE command? Since GMail implements it, I expect at least Android to…

Do the other browsers remember authentication information?

Do they have an IM client compatible with Jabber/GTalk? I guess Android does, I hope so at least. I would prefer a native client, not something that connects through a middleware server like Fring does.

Can they display Unicode characters, which include Unicode punctuation and Japanese text? I’m told the iPhone does…

Can they sync with something, keeping as much information as possible about a person? I have a very complete Address Book on OS X right now; I haven’t imported it into Evolution in quite a while, and I should find a way; none of Ovi Sync, Google Sync, or Yahoo! Sync seems to work well with the amount of detail I keep around; Google is probably the worst on that account, though. Being able to sync with Evolution directly would definitely be a good thing.

How feasible is it to write applications for them? I have read very bad things about the Palm Pre; I know that the iPhone has a complete SDK (which I should already have installed, but never used), but it only works on OS X; I do have that system, but I would rather work from Linux, so I’m curious about the support for the other two. There’s an Android SDK for Linux, but I have no clue how it works. An important detail here: I have no intention whatsoever of cracking (“jailbreaking”) the device; if I buy something I want it to work as well as possible without having to fiddle with it; if I have to fiddle, then I might as well go with something else, which is probably my main reason against getting an iPhone.

Bonus points if I can write open source applications for the device, since that’s what I’d very much like to do; I’d rather write an open source (free software) application and eventually “sell” it for a token amount on the store, for ease of installation, than write a closed-source application and keep it gratis.

Among the other features I’d need are support for Voice over IP (standard SIP protocol) over the air (that is, over the 3G network as well) and the ability to deal with QR Codes. More bonus points if there is a way to access QR Code decoding from custom applications (since that would allow me to refine my system tagging to a quite interestingly sophisticated point).

More: having software able to reject calls from a blacklist of numbers (including calls without a caller ID) would also be appreciated, since I haven’t stopped needing mine since that call (and I keep updating it with the numbers of nuisances as needed). Even more bonus points if there is also an SMS antispam that can kill the promotional messages that 3 sends me (they get old pretty quickly, especially considering I’m using a “business” account).

Now, all these functions might as well be handled by external apps that are not part of the firmware; that’s actually even better, since there’s a better chance they’d be updated more often than the firmware. But obviously, if I have to spend another €150 just to get the software I need, I might simply decide on another phone family.

At any rate, if you can help me with the future choice, I’d be definitely glad. Thanks!

After some time with Snow Leopard

You probably know that, as much as I am a Linux user, I’m an OS X user as well. I don’t usually develop for OS X, but I do use it quite a bit; after my laptop broke last March, I bought an iMac to replace it (and now I also have my MacBook Pro back, although with the optical unit still not working; I’m now tempted to get a second hard drive instead of an optical unit, since I can use the iMac’s DVD sharing instead).

And since I’m both a developer and a user, when the new release of OS X, Snow Leopard, was finally published, I ordered it right away. Two weeks into using the new version, I have some comments to make. And I’m going to make them here, because this is something that various Free Software projects could probably learn from too.

The first point is nothing new; Apple already said that Snow Leopard is not totally new, but rather a polished version of Leopard… with 64-bit under the hood. The 64-bit idea is not new to Linux: a lot of distributions already support it, and where it’s available, almost all system software uses it; there are still a few proprietary pieces that are not ported to 64-bit, especially for what concerns games, and software like Skype, but most of the stack is in good shape for us, so we really have nothing new to learn from OS X in that field.

I was sincerely expecting the Grand Central Dispatch thing to be rebranded OpenMP (by the way, thanks to whoever sent me the book; it hasn’t arrived yet, but I’ll make sure to put it to good use in Free Software once I have it!); instead it seems to be a totally different implementation, which Apple partly made available as open source free software (see this LWN article); I’m not sure whether I’m happy about it or not, given it’s yet another implementation of an already-present idea. On the other hand, it’s certainly an area where Free Software could really learn: I don’t think OpenMP is much used outside of CPU-intensive tasks, but as the Apple guys showed in their WWDC presentation, it’s something that even a mail client could make good use of.

I still have no idea what technique QuickTime X is using for the HTTP-based streaming; I’ll find out one day, though. For now I’m still working on the new implementation of the lscube RTSP parser, which should also support the already-present HTTP proxy passthrough; if QuickTime X uses the same technique, even better!

In the list of less-advertised changes, there are also things like better Google support in iCal and Address Book: for instance, you can now edit Google calendars from inside the iCal application, which is kinda cool (all the changes are automatically available both locally and on Google Calendar itself), and you can sync your Address Book with Google Contacts. The former is something that supposedly works with Evolution as well, although I think they really have a long way to go before it works as well; and that’s not to say that iCal integration works perfectly… at all!

The latter instead is a bit strange. I already had the impression that Google Contacts is some very bad shit (it doesn’t store all the information, the web interface is nearly unusable, and so on), but when I decided to enable the “Sync with Google” option in Address Book I probably made a big mistake: first, the thing created lots of duplicates in my book, since I uploaded a copy of all the entries with the cellphone some time ago, and some entries were seen as duplicates rather than as the same thing (mostly for people with an associated honorific, like “Dott.” for my doctors); this is quite strange, because the vCard files should have a unique ID just for that reason, to make sure they are not duplicated when moved between different services. In addition, the phone numbers got messed up as the copies added up (in Apple’s Address Book I keep them well formatted – +39 041 09 80 841 – while the Nokia removes the spaces, and it seems like Google Contacts sometimes drops the country code for no good reason at all).

Interestingly enough, though, while Leopard was known for its Mobile Me support, Snow Leopard adds quite a few more options for syncing data, probably because Mobile Me itself wasn’t that good a deal for most people. It still doesn’t support my Nokia E75 natively (but “my” plugin worked — a copy of the E71 plugin by Nokia with the phone name edited), and it doesn’t seem to support a generic SyncML provider (like Nokia’s Ovi service), but there is, for instance, a new “CardDAV” entry in the Address Book; I wonder if it’s compatible with Evolution’s CalDAV-based address book; if so, I might want to use that, I guess.

While Apple’s showcase of Snow Leopard was aimed at criticising Microsoft’s release of Windows Vista, with all the related changes in the interface, I wouldn’t be surprised if, when deciding how to proceed with the new version, they also factored in the critiques against KDE 4’s release. I hope that Gnome 3 won’t be anything like that, and will rather follow Apple’s approach of subtle, gentle changes, although I won’t count on it.

At any rate, the experience up to now has been quite nice; nothing broke heavily, and even Parallels Desktop worked fine after the update, which actually surprised me, since I expected the kernel-level stuff to break apart with the update. I wish Linux were as stable sometimes. But bottom line, although with a few problems, I still love Free Software better.

Mixing free software and proprietaryware

You probably know already, if you follow my blog, that I have some quite pragmatic views when it comes to software, and while I despise proprietary stuff, I also do use quite a bit of proprietary software and, most importantly, I pay for that.

For good or for bad, most of my paid work also involves working on proprietary software, be it supporting QuickTime RTSP extensions in feng or developing software that runs on Windows (and OS X and Linux). For this reason, as I said before, I also use Mono, since that allows me to reduce the amount of proprietary software I have to deal with.

But because working on proprietary software, for somebody used to the sharing and improving of free software, is quite difficult, I also apply one extra rule: when the customer wants closed-source proprietary software for what concerns the core business logic, I try to write as much of the code as possible in a generic fashion, so that it can be split into LGPL-licensed libraries. This way I can release part of the code I write as free software without going against my customers’ requests, and without costing them anything more.

And thanks to the fact that there already are LGPL-licensed libraries out there that do some of the work, this also simplifies my life. Well, at least when they work and I don’t need to spend a lot of time making them work. Unfortunately, sometimes that’s not the case, especially when I have to package for Linux something that was probably never tested on, or intended to be used on, Linux. So I wish to thank Jo Shields for helping me out the other night with packaging libraries that don’t provide, by themselves, a strongname.

So, in the end, I still think there is space for different licenses in different contexts; in particular, while the LGPL is a compromise from pure free software philosophies, it often allows you to free code that wouldn’t be freed if given a single choice (between GPL and proprietary).

On the other hand, I have to rant a bit about the price of proprietaryware, in Italy at least. For work I needed a license for Microsoft Office 2007 Professional (don’t ask, it’s a long story). In Italy, the price was €622 plus VAT; on Amazon UK, the same product (I don’t care about the language, and the code seems to work fine with a multi-language Office, by the way) was up for the equivalent of €314 plus VAT (in the former case, the VAT needs to go through the tax system; in the latter, it’s directly reimbursed by Amazon, so it’s also faster to deal with). Now I’m curious to see whether the same will hold true for Windows 7 licenses (yes, I’m afraid I’m going to have to deal with that as well for my jobs) in the next months. Kudos to Apple at least: the update to Snow Leopard was pretty cheap, was sent right away (thanks to my going through the business store), and really doesn’t seem to break anything on my systems, at least.

But still, I love working on Free Software; at least there I can fix the stuff that fails myself, or at least prod somebody to, most of the time!

Interesting notes about filesystems and UTF-8

You probably know already that I’m a UTF-8 enthusiast; I use UTF-8 extensively in everything I do: mail, writings, IRC, and whatever else; not only because my name can only be spelled right when UTF-8 is used, but also because it really makes it nicer to write text with proper arrows rather than semigraphical arrows, and proper ellipses as well as dashes.

On Linux, UTF-8 is not always easy to get right; there is quite a bit of software out there that does not play nice with UTF-8 and Unicode, including our GLSA handling software, and that can really be a bother to me. There are also problems when interfacing with filesystems like FAT that don’t support UTF-8 in any way.

Not so on Mac OS X, usually, because the system was designed from the start to make use of Unicode and UTF-8, including in the filesystem, HFS+. There is, though, one big problem with this: there are multiple ways to produce the same character in UTF-8, using either single precomposed codepoints, or combining diacritical marks, which are more complex but easier to compare case-insensitively. Since HFS+ can be case-insensitive (and indeed is by default, and has to be for the operating system volume), Apple decided to force the use of the latter format for UTF-8 text on HFS+: all file names are normalised before being used. This works fine for them, and the filenames are usually just as readable from Linux.

But there is a problem. Since I have lots of music in iTunes to be synced to my iPod, I usually keep my main music archive in OS X, and then rsync it over repeatedly to Linux so I can play it on my main system (or at least try to, since most of the audio players I found are sucky for what I need). In my music archive, I have many tracks by Hikaru Utada (宇多田ヒカル), which are named with their original titles (most of them come from the iTunes Store itself; others are ripped from my CDs); one EP I have is titled SAKURAドロップス, and in this title there are two characters that are decomposed into base and marker (ド and プ). While it might not be obvious why that happens, I’ll just rely on Michael Kaplan to explain it.

Now, the synced file maintains the normalised filename, which is fine. The problem is that something does not work right in zsh, gnome-terminal, or both. On Gentoo, with a local gnome-terminal, both when showing me the completion alternatives and when actually completing the filename, instead of ド I get ト<3099>; on Fedora via SSH, the completion alternatives are fine, while the command line still gets the non-recomposed version after completion.
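For those curious, this is exactly the NFD/NFC distinction from Unicode normalisation: HFS+ stores (a variant of) the decomposed form, and it’s the recomposition on display that goes wrong somewhere along the way. A quick demonstration:

```python
import unicodedata

# HFS+ stores decomposed (NFD-style) filenames: ド becomes ト followed
# by U+3099, the combining voiced sound mark. Recomposing to NFC on
# display is the step that zsh/gnome-terminal apparently miss here.
title = "SAKURAドロップス"
decomposed = unicodedata.normalize("NFD", title)
recomposed = unicodedata.normalize("NFC", decomposed)

assert "\u3099" in decomposed      # the bare combining mark shows up
assert recomposed == title         # NFC round-trips back to ド
```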

Update (2017-04-28): I feel very sad to have found out over a year and a half later that Michael died. The links in this and other posts to his blog are now linked to the archive kindly provided and set up by Jan Kučera. Thank you, Jan. And thank you, Michael.

Productivity improvement weekend

This weekend I’m going to try my best to improve my own productivity. Why do I say that? Well, there are quite a few reasons for it. The first is that I spent the last week working full time on feng, rewriting the older code to replace it with simpler, better-tested, and especially better-documented code. This is not an easy task, especially because you often end up rewriting other parts to play nicely with the new ones; indeed, to replace bufferpool, Luca and I rewrote almost the entire networking code.

Then there is the fact that I finally got a quote for replacing the logic board of my MacBook Pro that broke a couple of weeks ago: €1K! That’s almost as much as a new laptop; sure, not of the same class, but still. In the meantime I bought an iMac; I needed access to QuickTime even more than I knew before, because we currently don’t have a proper RTSP client: MPlayer does not support seeking, FFplay is broken by a few problems, and VLC does not behave in a very standards-compliant way either. QuickTime is, instead, quite well mannered. But this means I have spent money to go on with the job, which is, well, not exactly the nicest thing you can do if you need to pay off some older debts too.

So it means I have to work more; not only do I have to continue my work on lscube full time, I’m also going to have to take more jobs on the side. I have been asked about a few projects already, but most seem to require me to learn new frameworks or even new programming languages, which means they require quite a big effort. I need the money, so I’ll probably pick them up, but it’s far from optimal. I’ve also put on nearly-permanent hold the idea of writing an autotools guide, either as an open book or a real book; the former has shown no interest among readers of my blog, the latter no interest among publishers. I start to feel like an endangered species when it comes to autotools, alas.

But since, at least for lscube, I need access to the FFmpeg mailing list, and I need access to the PulseAudio mailing list for another project, and so on and so forth, I need to solve one problem I already wrote about: purging GMail labels out of older messages. I really need this to be solved, but I’m still not totally in luck. Thanks to identi.ca, I was able to get the name of a script that is designed to solve the same problem: imap-purge. Unfortunately there is a problem with one GMail quirk: deleting a message from a “folder” (actually a GMail label) does not delete the message from the server, it only detaches the label from that message; to delete a message from the server you’ve got to move it to the Trash folder (and either empty it or wait for 30 days so that it gets deleted). I tried modifying imap-purge to do that, but my Perl is nearly non-existent and I couldn’t even grok the documentation of Mail-IMAPClient regarding the move function.
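In case anyone wants to attempt the same, the workaround boils down to copying the old messages into the Trash and flagging the originals as deleted, instead of expunging them in place. A rough sketch with Python’s imaplib; the host, credentials, label and cutoff date are obviously placeholders, and the “[Gmail]/Trash” name can differ on localised accounts:

```python
import imaplib

def purge_label(host, user, password, label, before="01-Jan-2009"):
    """Move messages older than `before` from a GMail label to the Trash.

    On GMail, COPY to the Trash plus \\Deleted/EXPUNGE removes the message
    from the server (once the Trash is emptied); a plain EXPUNGE on the
    label would only detach the label from the message.
    """
    imap = imaplib.IMAP4_SSL(host)
    imap.login(user, password)
    imap.select(label)
    # Find the old messages in this "folder" (really a GMail label)
    status, data = imap.search(None, "BEFORE", before)
    for num in data[0].split():
        imap.copy(num, "[Gmail]/Trash")
        imap.store(num, "+FLAGS", "\\Deleted")
    imap.expunge()
    imap.logout()
```

This is only a sketch of the idea, not tested against a real account; imap-purge does the date-based selection part already, it’s the copy-to-Trash step that is GMail-specific.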

So this weekend either I find someone to patch imap-purge for me, or I’ll have to write my own script based on its ideas, in Ruby or something like that. A waste of time on one side, but it should allow me to save time further on.

I also need to get synergy up to speed in Gentoo; there have been a few bugs opened regarding crashes and other problems, and requests for startup scripts and SVN snapshots. I’ll do my best to work on that so that I can actually use a single keyboard and mouse pair between Yamato and the iMac (which I called, with a little pun, USS Merrimac; okay, I’m a geek). Last time I tried this, I had some problems with synergy deciding to map/unmap keys to compensate for the keyboard differences between X11 and OS X; I hope I can get this solved this time, because one thing I hate is having different key layouts between the two.

I also have to find a decent way to have my documents available on both OS X and Linux at the same time, either by rsyncing them in the background or by sharing them over NFS. It’s easier if I have them available everywhere at once.

The tinderbox is currently not running, because I wouldn’t have time to review the build logs. In the past eight days I turned on the PlayStation 3 exactly twice: once earlier today to try relaxing with Street Fighter IV (I wasn’t able to), and the other time just to try one thing about UPnP and HD content. I was barely able to watch last week’s Bill Maher episode, and not much more. I seriously lack that precious resource called time. And this is after I showed the thing called “real life” almost entirely out of the door.

I sincerely feel absolutely energy-deprived; I guess it’s also because I didn’t have my after-lunch coffee, but there are currently two salesmen boring my mother with some vacuum cleaner downstairs and I’d rather not go meet them. Sigh. I wish life were easy, at least once a year.

International problems

I’m probably quite a strange person myself, that much I knew, but I never thought that I would have so many problems when it comes to internationalisation, especially on Linux, but not limited to it. I have written before that I have problems with my name (and a similar issue happened last week when the MacBook I ordered for my mom was sent by TNT to “Diego Petten?”, which then wouldn’t be found properly by the computer system when looking up the package by name), but lately I have been having even worse problems.

One of the first problems happened while mailing patches with git to mailing lists hosted on the Kernel.org servers; my messages were rejected because I used as sender “Diego E. ‘Flameeyes’ Pettenò”, without double quotes around it. Per the mail RFCs, when a period is present in the sender or destination name, the whole name has to be enclosed in double quotes, but git does not seem to know about that and sends malformed email messages that get rejected. Even adding the escaped quotes in the configuration file didn’t help, so in the end I send my git email with my (new) full name “Diego Elio ‘Flameeyes’ Pettenò”, even if it’s tremendously long and boring to read, and Lennart scolded me because now I figure with three different aliases in PulseAudio (on the other hand, ohloh handles that gracefully).
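For reference, the rule in question is that a display name containing specials such as the period has to be sent as a quoted string. Python’s email.utils applies it automatically, which shows the form git should have produced (the address and the accent-less spelling of my surname are placeholders here, since a non-ASCII name would get RFC 2047-encoded instead, hiding the quoting):

```python
from email.utils import formataddr

# The period in the display name forces the quoted-string form;
# the single quotes around the nickname are not specials and stay as-is.
addr = formataddr(("Diego E. 'Flameeyes' Petteno", "flameeyes@example.net"))
print(addr)  # "Diego E. 'Flameeyes' Petteno" <flameeyes@example.net>
```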

A little parenthesis, if you’re curious where the “Elio” part comes from: I legally changed my name last fall, adding “Elio” as part of my first name (it’s not a “second name” in the strict meaning of the term, because Italy does not have the concept of a second name; it’s actually part of my first name). The reason for this is that there are four other “Diego Pettenò” in my city, two of whom are around my age, and the Italian system is known for mistaking identities; while adding a second name does not make me entirely safe, it should make it less likely that a mistake would happen. I chose Elio because that was the name of my grandfather.

So this was one of the problems; nothing really major, and it was solved easily. The next problem happened today when I went to write some notes about extending the guide (for which I still fail to find a publisher; unless I find one, it’ll keep the open donation approach), and, since the amount of blogging about the subject lately has been massive, I wanted to make sure I used the proper typographical quotation marks. It would have been easy to use them from OS X, but from Linux it seems to be quite a bit more difficult.

On OS X, I can reach the quotation marks on the “1” and “2” keys, adding the Option and Shift keys accordingly (single and double, open and closed); on Linux, with the US English, Alternate International keyboard I’m using, the thing is quite a bit more difficult. The sequence would be something like Right Control, followed by AltGr and ' (or "), followed by < or >; even if I didn’t have to use AltGr to get the proper keys (without AltGr, on the Alternate International keyboard the two symbols are “dead keys”, used for composing, which is quite important since I write both English and Italian with the same keyboard), it would be quite a clumsy way to access the two. And it also wouldn’t work with GNU Emacs on X11.
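For the record, these are the four characters in question, with their codepoints, which is what any remapping ultimately has to produce:

```python
import unicodedata

# The four typographical quotation marks, single and double, open and closed
for ch in "\u2018\u2019\u201c\u201d":
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)}")
```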

My first idea would have been to use xmodmap to just change the mappings of “1” and “2” to add third and shifted-third levels, just like on OS X. Unfortunately, adding extra levels with xmodmap seems to only work with the “mode switch” key rather than with the “ISO Level 3” key; the final result is that I had to “sacrifice” the right Command key (I use an Apple keyboard on Linux) to use as “mode switch” (keeping the right Option as Level 3 shift), and then map the “1” and “2” keys like I wanted. The result is usable, but it also means that all the modifiers on the right side have completely different meanings from what they were designed for, and it is not easy to remember all of them.
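For the curious, the kind of xmodmap setup I ended up with looks roughly like the following; the keycodes come from xev on my own keyboard and will almost certainly differ on yours, so treat this as a sketch rather than a drop-in file:

```
! Turn the right Command key into Mode_switch (keycode is machine-specific)
keycode 134 = Mode_switch

! Keysym columns: plain, Shift, Mode_switch, Mode_switch+Shift
keycode 10 = 1 exclam leftsinglequotemark leftdoublequotemark
keycode 11 = 2 at rightsinglequotemark rightdoublequotemark
```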

I thought about using the Keyboard Layout Editor, but it requires antlr3 for Python, which is not available in Gentoo and seems to be difficult to package, so for now I’m stuck with this solution; next week, when the iMac arrives, I’ll probably spend some more time on the issue (I already spent almost the whole afternoon on it, more than I should have). I’d sincerely love to be able to set up the same exact keyboard layout on both systems, so that I don’t have to remember which one I’m on to get the combinations right. I already published my custom OS X layout that basically implements the Xorg alternate international layout on OS X (the same layout is already available in Windows as “US International”, so OS X was the only one lacking it), so I’ll probably just start maintaining layouts for both systems in the future.

And I don’t even want to start talking about setting up proper IME for Japanese under this configuration…

Hardware induced break

I probably won’t be around for any project for a while. The reason is that a series of repeated hardware failures is wrecking my nerves, and I have come to the point where my health is at stake again, when I was already worried enough about it.

Two weeks ago my mother’s iBook started to misbehave (Safari crashing out of the blue and stuff like that), so I decided to replace it at the first chance; I ordered her a white 13” MacBook and went on with life. A few days later, my MacBook Pro failed to find its hard drive; the logic board has to be replaced, but the Apple tech support in Padova is quite stupid (more on that in the near future, just so that I can give some bad advertising to a very bad support centre).

Since I needed access to QuickTime for a job task, I quickly ordered an iMac out of my extraordinary hardware fund (already depleted by the hard disks bought last month); I wasn’t happy about it, but a job is a job and I needed the box.

Today came the last straw: after yet another blackout (and I don’t live in California), my iomega drive failed again. The first time was in November; I had the drives in RAID0 back then, and I lost tons of data, but nothing really important: I had to re-rip some of my music and that was it. This time they were in RAID1, but it’s still a HUGE problem, because that drive is the only working backup of the MacBook Pro, and I wanted to use it to restore my MacBook Pro preferences onto the iMac.

Now I’m copying my music over to Yamato’s WD drive, and hopefully from there I can export the Time Machine data in some way that can actually be used by the iMac install process.

But I’m not happy, not happy at all.