Diabetes control and its tech; should I build a Chrome app?

Now that I can download the data from three different glucometers, of two different models, with my glucometer utilities, I’ve been pondering ways to implement a print-out or display mode. The text output is fine if you’re just looking at how variable your readings can be, but once you have more than one meter it gets difficult to follow them across multiple outputs, and it’s not something your endocrinologist can usefully read.

I have a working set of changes that adds support for a sqlite3 backend to the tool, so you can download from multiple meters and then display the readings as a single series — which works fine if you are one person with multiple meters, rather than multiple users each with their own glucometer. On the other hand, this made me wonder whether I’m spending my time in the right direction.

Even just adding support for storing the data locally had me looking into two more dependencies: SQLite3 support (which, yes, comes with Python, but needs to be enabled), and pyxdg (with a fallback) to make sure the data ends up in the right folder, so that backups are simple. Having a print-out, or a UI that can display the data over time, would add even more dependencies, meaning that the tool would only really be useful if packaged by distributions, or if binaries were distributed. While this would still give you a better tool on non-Windows OSes — compared to having no tool at all if left to LifeScan (the only manufacturer currently supported) — it’s still limiting.
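For the curious, the change is conceptually as simple as the following sketch — this is not the actual code in my utilities, and the database name and schema are made up for illustration, but it shows where the two extra dependencies come in:

# Minimal sketch of the local-storage idea; names and schema are invented.
import os
import sqlite3

try:
    from xdg.BaseDirectory import save_data_path  # pyxdg, when available
    data_dir = save_data_path('glucometerutils')
except ImportError:
    # Fallback without pyxdg: follow the XDG basedir spec by hand.
    data_dir = os.path.join(
        os.environ.get('XDG_DATA_HOME', os.path.expanduser('~/.local/share')),
        'glucometerutils')
    os.makedirs(data_dir, exist_ok=True)

db = sqlite3.connect(os.path.join(data_dir, 'readings.db'))
db.execute("""
    CREATE TABLE IF NOT EXISTS readings (
        meter_serial TEXT,
        timestamp    TEXT,
        value_mgdl   REAL,
        comment      TEXT,
        PRIMARY KEY (meter_serial, timestamp)
    )""")

# Readings downloaded from any of the meters land in the same series.
db.execute("INSERT OR IGNORE INTO readings VALUES (?, ?, ?, ?)",
           ('ABC123', '2013-10-20T09:30:00', 95.0, 'fasting'))
db.commit()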

In a previous blog post I mused about the possibility of creating an Android app that implements these protocols. That would mean reimplementing them from scratch, as running Python on Android is difficult, so the code would have to be rewritten in a different language, such as Java or C# — indeed, that’s why I jumped on the opportunity to review PacktPub’s Xamarin Mobile Application Development for Android, which will be posted here soon; before you fret: no, I don’t think that using Xamarin for this would work, but it was still an instructive read.

But after discussing it with some colleagues, I had an idea that is probably going to give me more headaches than writing an Android app, and at the same time be much more useful. Chrome has a serial port API – in JavaScript, of course – which can be used by app developers. I don’t really look forward to implementing the UltraEasy/UltraMini protocol in JavaScript (that protocol is based on binary structs, while the Ultra2 should actually be easy, as it’s almost entirely 7-bit safe), but admittedly it would solve a number of problems: storage, UI, print-out, portability, ease of use.
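To give an idea of what “binary structs-based” means here, this is roughly how such a frame is handled in Python — the layout below is invented for the example, it is not LifeScan’s actual format; in JavaScript the same work means juggling ArrayBuffer and DataView objects by hand:

# Hypothetical framed packet in the general style of a binary link protocol:
# STX, total length, link-control flags, a little-endian payload, CRC16, ETX.
import struct

HEADER = struct.Struct('<BBB')   # stx, total length, link-control flags
FOOTER = struct.Struct('<HB')    # crc16, etx

def parse_packet(raw):
    stx, length, flags = HEADER.unpack_from(raw, 0)
    if stx != 0x02 or raw[length - 1] != 0x03:
        raise ValueError('not a valid frame')
    payload = raw[HEADER.size:length - FOOTER.size]
    crc, _etx = FOOTER.unpack_from(raw, length - FOOTER.size)
    return flags, payload, crc

def parse_reading(payload):
    # A reading record would itself be a packed struct, e.g. a 32-bit
    # timestamp followed by a 16-bit glucose value.
    timestamp, value = struct.unpack_from('<IH', payload)
    return timestamp, value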

Downloading the data to a device such as the HP Chromebook 11 would also be a terrific way to make sure you can show it to your doctor — probably more so than on a tablet, definitely more than on a phone. And I know for a fact that ChromeOS supports PL2303 adapters (the chipset used by the LifeScan cable I’ve been given). The only problem with the idea is that I’m not sure how HTML5 offline storage is synced with the Google account, if at all — if I were to publish the Chrome app, I wouldn’t want to have to deal with HIPAA.

Anyway, for now I’m just throwing the idea around; if somebody wants to start before me, I’ll be happy to help!

Browser fingerprinting

I posted some notes about browser fingerprinting back in March, noting how easy it is to identify a given user across requests just from the few passive scans that are possible without even needing Flash enabled. Indeed, EFF’s Panopticlick considers my browser unique even with Flash disabled.

But even though Panopticlick is only counting my browser among the people who actually ran the test — that is, just a fraction of all the possible users out there — it is also not exercising the full force of fingerprinting. In particular, it does not try to detect installed Chrome extensions, which is actually trivial to do in JavaScript for some of them. In my case, for instance, it’s easy to identify the presence of the Readability extension, because it injects an “indicator” as an iframe with a fixed ID. Similarly, it’s relatively easy to identify adblock users, as you have probably noticed on the many sites that beg you to disable your adblocker so that they can make some money from the ads.

Given how paranoid some of my readers are, I’m looking forward to somebody adding Chrome and Firefox extension detection to Panopticlick; it’ll definitely be interesting going forward.

Serving WebP images

A few months ago I experimented with WebP to reduce the traffic on my blog without losing quality. In the end the results were negative, and I decided instead to drop the backgrounds on my blog and replace them with some CSS gradients — not a lossless change, but definitely easier to load.

After VideoLAN Dev Days 2013 (of which I still have to write a report), I went to speak with Pascal again, and he told me that the new version of Chrome finally fixed its HTTP Accept header, so that it now prefers WebP to other image formats when available. I confirmed this: Chrome 30 reports Accept: image/webp,*/*;q=0.8. The q= parameter is not actually needed for Apache, but it’s a good idea to have it there anyway.

Thanks to this change and to mod_negotiation’s MultiViews, it’s possible to auto-select the format — JPEG, PNG or WebP — for Chrome users. Indeed, if you’re visiting my blog with Chrome 30 (not sure about 29), you’re being served mostly WebP images (the CC license logos are still provided as PNG, because the lossless compression was worse and the lossy one was not saving enough bytes to be worth it).

I started working on enabling this while waiting at Terminal 1 at CDG airport (this is the last time I’m flying Aer Lingus to Paris), and I was able to finish it before my flight boarded. What I realized just before that, though, is that Apache would still prefer serving WebP to everyone — I’d venture a guess that it’s because it’s smaller in size. That’s okay for Opera, Firefox and (obviously) Chrome, but not for Safari or IE.

Of course, if the other browsers actually reported the formats they support, everything would be fine, but that’s not the case. In particular, Firefox explicitly prefers image/png over anything else (Apache also assigns a low q= value to any glob request, just to be on the safe side, which is why I said earlier that q= is not needed for it), so even if I don’t make any more changes, Firefox will still prefer PNG to WebP (it won’t do anything of the sort for JPEG, but even if the web server prefers WebP to JPEG it’s going to be fine).

So how do you provide WebP without breaking the other browsers? One solution would be to use PageSpeed to recompress the images to WebP on the fly when requested by a compatible browser, but that is overkill, hard to package right, and, most importantly, requires browser-detection logic on the server, which is not very safe.

In the end I went with a safer option: provide WebP only to Chrome users and not to users of other browsers, at least until they decide to fix their Accept headers. But how? Well, I had to check Apache’s source code, because the documentation doesn’t seem to explain this clearly and explicitly: to decide which variant to serve, Apache multiplies the q= parameter coming from the browser — or its implicit value (which gives image/* and */* a default of less than 0.1) — by the qs= parameter passed when declaring the type:

AddType image/jpeg .jpeg .jpg .jpe
AddType image/png .png
AddType image/webp;qs=0.9 .webp

By assigning the value 0.9 to WebP and leaving the default of 1 to the other formats, I’m basically telling Apache that, all other things being equal (as when the browser sends Accept: */*, Internet Explorer style), I prefer to serve PNG or JPEG rather than WebP. It will also prefer to serve JPEG to Firefox (which uses image/*). Chrome 30, on the other hand, explicitly prefers WebP over any other image format, so Apache calculates the preference as 1.0*0.9 for WebP and 0.8*1.0 for PNG and JPEG. I have not checked what Opera does, but it looks like the browsers on my phone all support WebP without preferring it, so they won’t be served it either.
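The arithmetic is easy to check by hand; here is a quick sketch of the computation (simplified: Apache’s implicit penalty on glob entries is left out, since here it scales all the candidates equally and doesn’t change the outcome):

# Simplified model of mod_negotiation's choice: the effective preference is
# the client's q= value for the matching entry times the server's qs= value.
server_qs = {'image/webp': 0.9, 'image/png': 1.0, 'image/jpeg': 1.0}

def pick(client_accept):
    """client_accept maps media types (or globs) to q= values."""
    def client_q(mime):
        # An exact match wins over image/*, which wins over */*.
        for pattern in (mime, mime.split('/')[0] + '/*', '*/*'):
            if pattern in client_accept:
                return client_accept[pattern]
        return 0.0
    return max(server_qs, key=lambda mime: client_q(mime) * server_qs[mime])

# Chrome 30 ("image/webp,*/*;q=0.8"): WebP gets 1.0*0.9, PNG/JPEG get 0.8*1.0.
print(pick({'image/webp': 1.0, '*/*': 0.8}))   # image/webp
# Internet Explorer style ("*/*"): PNG/JPEG get 1.0*1.0, beating WebP's 0.9.
print(pick({'*/*': 1.0}))                      # image/png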

So right now WebP images on my blog are exclusive to Chrome users; the win is relatively good, halving the size of the Autotools Mythbuster cover on the right and shaving a few bytes off the top image for the links. There are definitely more interesting ways to save bandwidth by re-compressing the images I’ve used around the blog (many of which, re-compressed, end up taking half the space), but that will have to wait for me to fix this bloody Typo install, as right now editing posts fails.

Another thing I will have to work on is a tool to handle the re-compression. Right now I’m doing it by hand, and it’s both a waste of time and prone to errors. I’ll have to come up with a good way to describe images so that a tool can re-compress them and evaluate whether to keep the WebP version or not, and at the same time I need to find a way to store the originals at the highest quality. But that’s a topic for a different time.

The WebP experiment

You might have noticed over the last few days that my blog underwent some surgery, and in particular that, even now, on some browsers the home page does not look all that great. In particular, I’ve removed all but one of the background images and replaced them with CSS3 linear gradients. Users browsing the site with the latest version of Chrome, or with Firefox, will have no problem and will see a “shinier” and faster website; others will see something “flatter”. I’m debating whether I want to provide them with a better-looking fallback or not; for now, not.

But this was also a plan B — the original plan was to leverage HTTP content negotiation to provide WebP variants of the website’s images. This looked like a win-win because, as ludicrous as it sounded when WebP was announced, it turns out that with its dual mode, lossy and lossless, it can in one case or the other outperform both PNG and JPEG without a substantial loss of quality. In particular, lossless works like a charm for “art” images, such as the CC logos or my diagrams, while lossy works great for logos, like the Autotools Mythbuster one you see on the sidebar, or the (previous) gradient images you’d see in the backgrounds.

So my obvious instinct was to set up content negotiation — I’ve used it before for multi-language websites, and I expected it to work just as well for multiple types, since that’s what it’s designed for… but after setting it all up, it turned out that most modern web browsers still do not support WebP at all… and they don’t handle content negotiation as intended either. For this to work, one of two things is needed.

The first, and best, option would be for browsers to only Accept the image formats they support, or at least to prefer them — this is what Opera for Android does: Accept: text/html, application/xml;q=0.9, application/xhtml+xml, multipart/mixed, image/png, image/webp, image/jpeg, image/gif, image/x-xbitmap, */*;q=0.1 — but that seems to be the only browser doing it properly. In this listing you’ll see that it supports PNG, WebP, JPEG, GIF and bitmap, and then accepts whatever else with a lower preference. If WebP were not in the list, even if the server preferred it, it would not be sent to the client. Unfortunately, this is not going to work, as most browsers send Accept: */* without explicitly providing the list of supported image formats. This includes Safari, Chrome and MSIE.

Point of interest: Firefox does explicitly rank one image format ahead of the others: PNG.

The other alternative is for the server to default to the “classic” image formats (PNG, JPEG, GIF) and expect browsers that support WebP to prioritize it over the other image formats. Again, this is not the case: as shown above, Opera lists it but does not prioritize it, and Firefox prioritizes PNG over anything else, with no special handling for WebP.

Issues are open with both Chrome and Mozilla to improve the support, but the fixes haven’t reached mainstream releases yet. Google’s own suggested solution is to use mod_pagespeed instead — but that module, which I already named in passing in my post about unfriendly projects, does something else entirely: it changes the served content on the fly, based on the reported User-Agent.

Given that I’ve spent some time on user agents, I’d say I have the experience to state that this is a huge Pandora’s box. If I already have trouble with minor, barely developed browsers reporting themselves as Chrome to fake their way into sites that check the user agent field in JavaScript, you can guess how many of those actually support the features that PageSpeed thinks they support.

I’ll come back to PageSpeed in another post; for now I’ll just say that WebP has the numbers to become the next-generation format out there, but unless browser developers, as well as web app developers, get their act together, we’re going to pile hacks upon hacks for years to come… Currently my blog uses a CSS3 feature with the standardized syntax — not all browsers understand it, and those will see a flat website without gradients; I don’t care, and I won’t start adding workarounds for that just because (although I might use SCSS, which would fix it for Safari)… newer browsers will fix the problem, so just upgrade, or use a sane browser.

User-Agent strings and entropy

It was 2008 when I first got the idea to filter User-Agents as an antispam measure. It worked on its own for quite a while, but recently my ruleset has had to rely on more sophisticated fingerprinting to catch spammers. It still works better than a captcha, but it has gotten a bit less effective.

One of the reasons why the User-Agent itself is no longer enough is that my filtering has been hindered by a more important project. EFF’s Panopticlick has shown that the uniqueness of User-Agent strings is actually an easy way to track a specific user across requests. This became important enough that Mozilla standardized their User-Agent strings starting with Firefox 4, to reduce their size and thus their entropy. Among other things, the “trail” component has been fixed to 20100101 on the desktop, and to the same version as Firefox itself on mobile.
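For reference, the “entropy” here is the Panopticlick-style measure of self-information: if only a fraction p of browsers send a given string, that string alone contributes -log2(p) bits towards identifying you. A toy calculation (the shares are made up):

# Bits of identifying information carried by a header value.
from math import log2

def identifying_bits(share_of_users):
    """share_of_users: fraction of browsers sending this exact string."""
    return -log2(share_of_users)

# A standardized string shared by, say, 5% of visitors:
print(identifying_bits(0.05))   # ~4.3 bits
# A string carrying a per-device build identifier, seen once in a million:
print(identifying_bits(1e-6))   # ~19.9 bits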

Unfortunately, Mozilla lies on that page. Not only is the trail not fixed for Firefox Aurora (i.e. the alpha version) — which meant my first set of rules was refusing access to all users of that version — but their own Lightning extension for SeaMonkey appends to the User-Agent, even though they said that wasn’t supported anymore.

A number of spambots seem to get this wrong, by the way. My guess is that they have some code that generates the User-Agent by gluing together a bunch of fragments and randomizing the result, so you can’t just block one particular agent string. Damn smart, if you ask me — and unfortunate, as ModSecurity keys its IP collection on the remote address plus the user-agent, so if they cycle through different user agents it’s harder for ModSecurity to understand that it’s actually the same IP address.
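To see why the rotation matters: the collection is keyed on the pair of remote address and User-Agent, which conceptually works like the following — a rough Python analogy, not the actual ModSecurity directives:

# Keying a per-client counter on IP address plus User-Agent. A spambot that
# cycles its User-Agent gets a fresh counter on every request, so per-client
# thresholds never trigger for it.
from collections import Counter
from hashlib import sha1

hits = Counter()

def record(remote_addr, user_agent):
    key = sha1(f'{remote_addr}|{user_agent}'.encode()).hexdigest()
    hits[key] += 1
    return hits[key]

record('192.0.2.7', 'Mozilla/5.0 (X11; Linux x86_64) ...')    # -> 1
record('192.0.2.7', 'Mozilla/5.0 (X11; Linux x86_64) ...')    # -> 2
record('192.0.2.7', 'Mozilla/4.0 (compatible; MSIE 6.0) ...') # -> 1 again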

I do have some reservations about Mozilla’s handling of extension identification. First they say that extensions and plugins should not edit the agent string anymore – but Lightning does! – and then they suggest that extensions can instead send an extra header to identify themselves. But that just means fingerprinting systems only need to start counting those headers as well as the generic ones that Panopticlick already considers.

On the other hand, other browsers don’t seem to have gotten the memo yet — indeed, both Safari’s and Chrome’s strings are long and include a bunch of almost-independent version numbers (AppleWebKit, Chrome, Safari — and Mobile on the iOS versions). It gets worse on Android, as both the stock browser and Chrome provide a full build identifier, which is different not only from one device to the next, but also from one firmware to the next. Given that each mobile provider has its own builds, I would be very surprised if I could find two friends of mine with the same identifier in their browsers. Firefox is a bit better on that front, but it sucks in other ways, so I’m not using it as my main browser there anymore.

Browsers on the Kindle Fire

A few days ago I talked about Puffin Browser, with the intent of discussing in more detail the situation with browsers on the Kindle Fire tablet I’m currently using.

You might remember that at the end of last year I decided to replace Amazon’s firmware with a CyanogenMod ROM, to get something useful out of the device. Besides the lack of access to Google Play, one of the problems I had with Amazon’s original firmware was that the browser it comes with is flaky to the point of uselessness.

While Amazon’s AppStore does include many of the apps I needed or wanted – including SwiftKey Tablet, which is my favourite keyboard for Android – they made it impossible to install them on their own firmware. I’ve been tempted to install their AppStore on the flashed Kindle Fire and see if they would let me install the apps then; it would be quite a laugh.

Unfortunately, while the CM10 firmware allows me to make very good use of the device — much more than I could ever get out of the original firmware — the browsing experience still sucks big time. I currently have a number of browsers installed: Android’s stock browser – with its non-compliant requests – Google Chrome, Firefox, Opera and the aforementioned Puffin. There is no real winner in the lot.

The Android browser has a terrible network implementation and takes way too much time requesting and rendering pages. Google Chrome is terrible on the whole, probably because the Fire is too underpowered to run it properly, which makes it totally useless as an app. I only keep it around for testing purposes, until I get a better Android tablet.

Firefox has the best navigation support, but every time I tap on a field and SwiftKey has to be brought up, it takes a full minute. Whether this is a bug in SwiftKey or in Firefox, I have no idea. If someone has an idea of who to complain to about it, I’d love to report it and see it fixed.

The best option, besides Firefox, is Opera. While slightly slower than Firefox at rendering, it does not suffer from the SwiftKey bug. I’m honestly not sure at this point whether the version of Opera I’m using renders with their own Presto engine or with WebKit, which they announced they are moving to — if it’s the latter, it’s going to be a loss for me, I guess, since the two browsers that are definitely WebKit-based are not behaving nicely for me here.

Now, from what I said about Puffin, you’d expect it to behave properly enough. Unfortunately that is not the case. I don’t know if it’s a problem with my local bandwidth being too limited, but in general its responsiveness is worse than Opera’s, although not as bad as Chrome’s. The end result is that even the server-side rendering does not make it usable.

More reviews of software running on the Fire will follow, I suppose, unless I decide to get a newer tablet in the next few weeks.

The complexity of request validation

You might have read that I use a complicated setup with ModSecurity to prevent spam on this blog — I have written about it extensively before, and I also published my rules so that other sites can use them (VideoLAN’s forums are using them as well).

Well, maintaining this ruleset is not easy at all; the problem comes when new browsers are introduced into the mix and make validation difficult. This is what happened a few months ago, when Google first published Chrome for ICS — which I still don’t have access to; I think I’ll get an HTC One X as soon as I get to California. Well, they did it again with the new Chrome for iOS.

Chrome can identify itself in three different ways: Chrome, CrMo (on Android) and CriOS (on iOS devices). This simply means that any special case put in place for Chrome on Android didn’t get automatically extended to the new Chrome on iOS — which is probably intended, given that Chrome on iOS has to use Safari’s standard WebKit engine rather than bring its own; the only reason to use it is to have your bookmarks synchronised with your computer.

Now, though, is when the problems start cropping up: the new Chrome on iOS has the same problem as the one on ICS — it doesn’t send an Accept header, which almost every other browser sends, including the main desktop Chrome builds. So it was a matter of adding CriOS to the list of special cases, together with CrMo.

But there is one more issue: the Chrome for iOS interface has a feature that lets you switch to the so-called “desktop interface” — for sites that serve different interfaces depending on the User-Agent value. What you would expect at that point is for the application to report itself as Chrome in the user agent, but that’s not the case: what it reports is Safari. The problem is that it still implements some peculiarities that are generally limited to Chrome, including SDCH, which is something I used to validate on before.

So what I ended up doing was dropping the validation based on browsers advertising sdch as an encoding — although I kept the check that if a request claims to be Chrome, it has to advertise sdch (unless, of course, it’s passing through a proxy). This still makes it possible to catch most of the unsophisticated crawlers and tools that try to pass themselves off as a browser.
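In plain terms, the check that survived amounts to something like this — a simplified restatement in Python, not the actual ModSecurity rule, and the matching here is far cruder than what the ruleset does:

# A request claiming to be (desktop) Chrome must advertise sdch in
# Accept-Encoding, unless it came through a proxy announcing itself with a
# Via header, since proxies may strip or rewrite encodings.
def chrome_claim_is_plausible(headers):
    ua = headers.get('User-Agent', '')
    if 'Chrome/' not in ua:
        return True                  # not claiming to be desktop Chrome
    if 'Via' in headers:
        return True                  # proxied requests are exempted
    return 'sdch' in headers.get('Accept-Encoding', '')

# A naive bot that copies just the User-Agent string trips over this:
print(chrome_claim_is_plausible({
    'User-Agent': 'Mozilla/5.0 ... Chrome/18.0.1025.162 Safari/535.19',
    'Accept-Encoding': 'gzip, deflate'}))    # False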

The importance of HTTP request fingerprinting

I started looking at ModSecurity when I wanted to implement a User-Agent-based antispam method, which has proven time and time again to work quite well — to the point that I started publishing the ruleset, which works not only as an antispam measure but also as a way to keep tons of bad crawlers from finding my email addresses, and so on.

When I first proposed this kind of filtering I received quite a few complaints that the HTTP protocol didn’t define the User-Agent for that kind of use, but thanks first to EFF’s Panopticlick – which demonstrated clearly that “anonymised” requests are not as anonymous as their perpetrators would expect – and more recently to SpiderLabs’s work, I am now fully certain that I took the right road.

I’ve put a bit more work into the rules this week, to make them more resilient to faked requests such as those coming from script kiddies’ tools like the HOIC tool described in the SpiderLabs blog post linked above. One of the most interesting detections I came up with is for real Chrome requests: while it seems to me that Google itself does not leverage it, Chrome as of version 18 still implements their own proposed Shared Dictionary Compression over HTTP (SDCH), even though I don’t think it will ever be used in the real world. Since it is the only browser actually requesting that encoding, I can easily assume a connection between the two — the only exception being Epiphany, which in its most recent versions declares itself to be Chrome… which means you then have a browser claiming to be another (Chrome), which in turn claims to be a third (Safari), which uses an engine (KHTML) claiming to be the same as another (Gecko), all the while declaring it’s all compatible with Mozilla/5.0.

One issue I found while doing this work had to do with Android. On both versions 2 and 3 (is anybody really still using Android 1?), the default AOSP browser sends a full-fledged HTTP request, which among other things includes an Accept header. This is what every browser I have ever tried does, to the point that ModSecurity’s own Core Rule Set assigns negative points to requests coming without one; my ruleset tightens this further by checking whether the request purports to come from a known browser, and if so rejecting it if it doesn’t include that header. This worked up to now — note that requests coming through a proxy, when that’s made explicit through a Via header, are not validated against these checks, simply because many proxies are known to muck with the headers.

Anyway, as I was saying, this assumption is badly broken by Android 4 (up to 4.0.3, and CyanogenMod as well); it might have started as a way to minimise bandwidth usage, but for whatever reason, in this version the AOSP browser does not send an Accept header at all — actually, it seems to have dropped most of the headers it was sending before that are not strictly necessary for the server to process the request. I could have sworn that Accept was mandatory in the HTTP protocol, but it seems that either I was totally mistaken, or it was only noted in some recommendation that never made it into the standard. The ruleset now exempts Android 4 from that particular test, but I’m not really too happy about it.
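Reduced to its essence, the resulting logic is something like the following — again a Python sketch of the idea, not the real ModSecurity directives, with a much cruder browser match than the ruleset uses:

# "Known browsers must send an Accept header", with the two exemptions
# discussed above: requests relayed through a declared proxy, and the
# Android 4 AOSP browser, which stopped sending the header.
KNOWN_BROWSER_TOKENS = ('Firefox/', 'Chrome/', 'Safari/', 'Opera')

def accept_header_ok(headers):
    ua = headers.get('User-Agent', '')
    if not any(token in ua for token in KNOWN_BROWSER_TOKENS):
        return True                  # unknown agents are judged by other rules
    if 'Via' in headers:
        return True                  # proxies muck with the headers
    if 'Android 4' in ua:
        return True                  # the AOSP browser on ICS is exempted
    return 'Accept' in headers

# A crawler claiming to be Firefox but sending no Accept header fails:
print(accept_header_ok({'User-Agent': 'Mozilla/5.0 ... Firefox/15.0'}))  # False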

But that’s definitely not the only thing that is out of place with Android. Indeed, if you take an HTC Android device, the browser you open is not the AOSP one but HTC’s own implementation. This version … does not fully declare itself as an Android device using a browser compatible with Mobile Safari. Instead, it reports itself as a full desktop Safari — and not in the way Chrome does it, but by pretending to be Mac OS X 10.6.3 running on an Intel Mac. Honestly, that’s a crazy thing to do.

There are a few more things that I hope to be able to handle in my ruleset to make it even tighter, without adding substantial false positives. This means not only fewer spam comments, but also fewer crawlers finding our email addresses, and fewer risks associated with Denial of Service attacks, distributed or not.

If you would like to support my work on the ruleset, you can find it on Flattr, where it’s depressingly stuck at only two clicks. If you would like to use the ruleset, you can find it on GitHub, and you can use it for free, obviously.

Google and software mediocrity

I haven’t commented very much, if at all, on most of the new Google projects, which include Chrome, Chromium and Chrome OS; today, since I’m waiting on a few long-running tasks to complete, I’d like to spend my two eurocents on them.

You can already guess from the title of this post that I’m really sceptical about Google entering the operating system market; the reason is that I haven’t really seen anything in Google’s strategy that would lead us to expect a very good product from them in this area. While Google is certainly good at providing search services, and GMail is also my email provider of choice, I see quite a few shortcomings in their software, and that does not make me count on Chrome OS being any better than Windows XP is.

First, let’s just say that Google Chrome is not the first piece of software that Google has released for the desktop; there have been quite a few other projects before it, like, for instance, Google Talk. Since I have a personal beef with this one, I’d like to go on about it a bit. When Google launched their own instant messaging service for the masses — through GMail and a desktop client, called Google Talk and based on the XMPP protocol — there was quite some talk around it because, while using the same protocol we know as Jabber, it didn’t connect to the server-to-server Jabber network that allows users of different Jabber servers to communicate. With time this S2S support was added, and now a GTalk user can talk with any Jabber user, so as a service it’s really not bad at all, and you can use any Jabber client to connect to GTalk.

The Windows client, though, seems to be pretty much abandoned: I haven’t seen updates in a while (although I might not have noticed them in the past month or two), and it lacks quite a few features, like merging multiple usernames into a single contact and so on. Now, at about the same time as they released the Windows client, Google published specifications for their extensions that allow audio (and video?) chat over XMPP-negotiated connections, and a library (libjingle) for other clients to implement this protocol.

The library, unfortunately, ended up having lots of shortcomings, and most projects decided to import and modify it; it was then forked, at least once but I think even twice, cut down and built up and mangled so much that it probably doesn’t look anything like the original one from Google. And yet the number of clients that support the GTalk audio/video extensions is… I have no idea. Empathy supports them, if I recall correctly, but last time I tried, it didn’t really work that well. As far as I know, libpurple — which is used by both Pidgin and Adium, and would thus cover clients for all the major operating systems (free or not) — does not seem to support them.

Now, the reason I consider GTalk mediocre is not limited to the software Google provides; it’s a matter of how they played their cards. It seems to me that instead of pushing themselves as the service provider, they wanted to push themselves as a software provider as well, and the result is that, besides Empathy (which is far from a usable client in my opinion), there is no software that seems to implement their service properly. They could have implemented their extensions in libpurple — or paid someone to implement them, or something like that — and that would have given them an edge; they could have worked with Apple (considering they already work closely with them) so that iChat could use GTalk’s audio and video extensions (instead, iChat AV in Leopard uses a different protocol that only works between Macs), and so on.

What about Google Chrome? Well, when it was announced and released I was stuck in the hospital, so I missed most of the hype of the first days; when I finally went to test it, almost a month later, I was surprised at how pointless it seemed to me. Why? Because, from what I can see, it does not render text as well as Firefox or Safari on Windows. It’s probably faster than them, but then again most people don’t care (at least in Italy, Internet connections are so slow you wouldn’t notice), and there is one important problem: the Google bias of the browser.

I think lots of people have criticised the way Microsoft originally handled Internet Explorer and their own Internet services, to the point that now Microsoft allows you to set Google as the search provider in the default install. Well, I don’t see Chrome as anything much different: it’s a browser tailored to suit Google’s services, and of course its development will keep suiting them. Will it ever get an ad-blocking feature, like the ones available for Firefox, Konqueror and Safari? Probably not, because Google takes a good share of its revenue from Internet advertising. Will it ever get a Delicious extension? Probably not, because that’s a Yahoo! service nowadays, and Google has its own alternative.

Now, I don’t want to downplay the important technical innovations of Google Chrome, even when they are very basic, like the idea of splitting tabs into separate processes; indeed, I think I have read that Mozilla is now working on implementing a similar feature for the next major Firefox release. This is what we actually get out of the project — not Chromium itself.

Then there is Android; I don’t think I can really comment on it, but from what I can see there is not much going on with Android: nobody has asked me yet whether I develop for Android, while I’ve had a few requests for Symbian and iPhone development in the past year or so. Android phones do not seem to shine with non-technical people, and technical people, at least in Italy, are unlikely to pay the price you have to pay to get the Android-based HTC phones from Vodafone and TIM.

In contrast with Nokia, Google fragmented the software landscape even more. While Google already provided mobile-optimised services on the web, and some Java-based software to access their services from J2ME-compatible phones, they also started providing applications for Nokia’s Symbian-based phones. Unfortunately this software does not shine, with the exception of Google Maps, which works pretty well and integrates with Nokia’s software pretty decently; in particular, the “main” Google application for Nokia crashed my E75 twice! I ended up removing it and living without it (the YouTube application sort of works; the GMail application also “sort of” works, but with the new IMAP client it’s really pointless for me). So we have mediocre software from Google for Nokia phones, and probably no good reason for Google to improve on it.

But there are also things that Google hasn’t implemented at all: for instance, there is no GTalk client for Nokia phones, nor a web-based version for mobile phones, which would have been a killer feature! Instead, Nokia implemented its own Nokia Chat, which has since become Contacts for Ovi; it also uses XMPP, and it also has S2S, but it does not allow you to use GTalk accounts, requiring you to have two different identities: one for computers and one for the mobile phone. Similarly, with Google Sync for Nokia phones only partially working — in particular with no support for syncing Google Calendar, and with a tremendous loss of detail when syncing contacts — Google loses to Nokia’s Ovi sync support as well.

Now, I’m not a market analyst and I really like to stay away from marketing, but I really don’t see Google as a major player in software development. I’d have much preferred that they start improving the integration of their services with Free Software like Evolution (whose Google Calendar integration sucks way too much, and whose IMAP access to GMail causes two copies of each sent message to be stored on the server, as well as creating a number of folders/labels that shouldn’t be there at all!), rather than building a new “operating system”.

There are more details I’m sceptical about, like hardware support (which I’ll leave to Matthew Garrett to explain, since he knows the matter better) and software support, but for those I’ll wait and see what they actually deliver.