On Android Launchers

Usual disclaimer: what I’m writing here is my own opinion, not my employer’s, and so on.

I have a relationship that is probably best described as love/hate/hate with Android launchers, going back to the first Android phone I used — the Motorola Milestone, the European version of the Droid. I have been migrating to new launcher apps every year or two, sometimes because I got a new launcher with the firmware (I installed an unofficial CyanogenMod port on the Milestone at some point), or with a new phone (the HTC Desire HD at some point, which also got flashed with CyanogenMod), or simply because I got annoyed with one and tried a different one.

I remember for a while I was actually very happy with HTC’s “skin”, which included the launcher and came with beautiful alpha-blended widgets (a novelty at the time), but I replaced it with, I think, ADW Launcher (the version from the Android Market – what is now the Play Store – not the one shipped with CyanogenMod at that point). I think this was the time when system apps could not be upgraded through the Store/Market distribution. To make the transition smoother I even ended up looking for widget apps, including a couple of “pro” versions, but at the end of the day I grew tired of those as well.

At some point, I think upon a colleague’s suggestion, I jumped onto the Aviate launcher, which was unfortunately later bought by Yahoo!. As you can imagine, Yahoo!’s touch was not going to improve the launcher at all, to the point that one day I got annoyed enough to start looking for something else.

Of all the launchers, Aviate is probably the one that looked the most advanced, and I think it’s still one of the most interesting ideas: it had “contextual” pages, with configurable shortcuts and widgets, that could be triggered by time-of-day, or by location. This included the ability, for instance, to identify when you were in a restaurant and show FourSquare and TripAdvisor as the shortcuts.

I would love to have that feature again. Probably even more so now, as the apps I use are even more modal: some of them I only use at home (such as, well, Google Home, the Kodi remote, or Netflix), some of them nearly only on the go (Caffe Nero, Costa, Google Pay, …). Or maybe what I want is Google Now, which does not exist anymore, but let’s ignore that for now.

The other feature that I really liked about Aviate was what I’ll call jump-to-letter: the Aviate “app drawer” kept apps organised by letter, in separate groups, which meant you could just tap a letter on the right edge of the screen and jump straight to it. And having the ability to just go to N to open Netflix is pretty handy. Particularly when icons are all mostly the same except for maybe colour.
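Just to illustrate the idea, here is a minimal Python sketch with made-up app names, not how any launcher is actually implemented: the drawer is essentially the installed apps grouped by initial, with the letter strip acting as an index into those groups.

```python
from itertools import groupby

# Hypothetical list of installed app labels; a real launcher would
# query the package manager for these instead.
apps = ["Airbnb", "Calendar", "Camera", "Netflix", "Nova Launcher", "Twitter"]

def build_drawer_index(app_labels):
    """Group app labels by first letter, the way a jump-to-letter
    drawer does: tapping a letter scrolls straight to its group."""
    labels = sorted(app_labels, key=str.casefold)
    return {
        letter.upper(): list(group)
        for letter, group in groupby(labels, key=lambda name: name[0].casefold())
    }

index = build_drawer_index(apps)
print(index["N"])  # ['Netflix', 'Nova Launcher']: jump to N, tap Netflix
```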

So when I migrated away from Aviate, I looked for another launcher with a similar jump-to-letter feature, and I ended up finding Action Launcher 3. This is probably the launcher I used the longest; I bought the yearly supporter IAP multiple times because I thought it deserved it.

I liked its idea of backporting features from what was originally the Google Now Launcher – nowadays known as the Pixel Launcher – allowing phones already on the market to use the new features Google announced for their own phones. At some point, though, it started pushing the idea of sideloading an APK so that the launcher could also backport the actual Google Now page — that made me very wary, and I never installed it, as it would have needed too many permissions. But the launcher became too pushy when it started updating every week, replacing my default home page with its own widgets. That was too much.

At that point I looked around and found Microsoft Launcher, which was (and is) actually pretty good. While it includes integration for Microsoft services such as Cortana, they kept all the integration optional, so I set it up with all those features disabled and kept the stylish launcher instead. With jump-to-letter, and Bing’s daily wallpapers, which are terrific, particularly when they are topical.

It was fairly lightweight while still having useful features, such as the ability to hide apps from the drawer — including those that can’t be uninstalled from the phone, those that have an app icon for no reason, such as SwiftKey and Gboard, and the many “Pro” license key apps that only launch the primary app.

Unfortunately, last month something started going wrong, either because of a beta release or something else, and the Launcher started annoying me. Sometimes I would tap the Home button, and the Launcher would show up with no icons and no dock; the only thing I could do was go to the Apps settings and force stop it. It also started failing to draw the AIX Weather Widget, which is the only widget I usually have on my personal phone (the work phone has the Calendar on it). I gave up, despite one of the Microsoft folks contacting me on Twitter asking for further details so that they could track down the issue.

I decided to reconsider the previous launchers I had used, but I skipped over both Action Launcher (too soon to reconsider, I guess) and Aviate (given the current news between Flickr and Tumblr, I’m not sure I trust them — and I didn’t even check whether it is still maintained). Instead I went for Nova Launcher, which I had used before. It seems fairly straightforward, although it lacks the jump-to-letter feature. It worked well enough when I installed it, and it’s very responsive. So I went with that for now. I might reconsider more of them later.

One thing I noticed, which all three of Action Launcher, Microsoft Launcher, and Nova Launcher do, is allow you to back up your launcher configuration. But none of them does it through the normal Android backup system, like WhatsApp or Viber do. Instead they let you export a configuration file that you can reload. I guess it might be so you can copy your home screen from one phone to another, but… I don’t know, I find it strange.

In any case, if you have suggestions for the best Android launcher, I’m happy to hear them. I’m not set in my ways with Nova Launcher, and I’m happy to pay a reasonable amount (up to £10, I would say) for a “Pro” launcher, because I know they’re not cheap to build. And if any of you knows of a “modal” launcher that would allow me to change the primary home screen depending on whether I’m home or not (I don’t particularly need the detail that Aviate used to provide), I would be particularly happy.

Two words about my personal policy on GitHub

I was not planning on posting on the blog until next week, trying as I am to stick to a weekly schedule, but today’s announcement of Microsoft acquiring GitHub is forcing my hand a bit.

So, Microsoft is acquiring GitHub, and a number of Open Source developers are losing their minds, in all possible ways. A significant proportion of the comments on this that I have seen on my social media sound like doomsaying, as if this spells the end of GitHub, because Microsoft is going to ruin it all for them.

Myself, I think that if it spells the end of anything, it is the end of the one-stop shop for working on any project out there — not because of anything Microsoft did or is going to do, but because a number of developers are now leaving the platform in protest (protest of what? one company buying another?).

Most likely, it’ll be the fundamentalists who move their projects away from GitHub. And depending on what they decide to do with their projects, it might not even show up on anybody’s radar. A lot of people are pushing for GitLab, which is both an open-core self-hosted platform and a PaaS offering.

That is not bad. Self-hosted GitLab instances already exist for VideoLAN and GNOME. Big, strong communities are, in my opinion, in the perfect position to dedicate people to supporting the core infrastructure that makes open source software development easier, in particular because it’s easier for a community of dozens, if not hundreds, of people to find dedicated people to work on it. For one-person projects, that’s overhead, distracting and destructive as well, as fragmenting into micro-instances will make it painful to fork projects — and at the same time, allowing any user who just registered to fork the code on any instance is prone to abuse and a recipe for disaster…

But this is all going to be a topic for another time. Let me try to go back to my personal opinions on the matter (to be perfectly clear that these are not the opinions of my employer and yadda yadda).

As of today, what we know is that Microsoft acquired GitHub, and they are putting Nat Friedman of Xamarin fame (the company that stood behind the Mono project after Novell) in charge of it. This choice makes me particularly optimistic about the future, because Nat’s a good guy and I have the utmost respect for him.

This means I have no intention of moving any of my public repositories away from GitHub, unless doing so would bring a substantial advantage. For instance, if there was a strong community built around medical devices software, I would consider moving glucometerutils. But this is not the case right now.

And because I still root most of my projects at my own domain, if I did move them, the canonical URLs would still be valid. This is a scheme I devised after getting tired of fixing links to wherever unieject ended up.
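To make the scheme concrete: it amounts to nothing more than a redirect table under the domain I control. A minimal sketch follows, with made-up paths and targets rather than my actual configuration:

```python
# Minimal sketch of the canonical-URL idea: project pages live under a
# domain I control, and a trivial redirect table points at whatever
# hosting currently holds the code. Paths and targets are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer

REDIRECTS = {
    "/p/glucometerutils": "https://github.com/example/glucometerutils",
    "/p/unieject": "https://example.org/archive/unieject",
}

class CanonicalRedirect(BaseHTTPRequestHandler):
    def do_GET(self):
        target = REDIRECTS.get(self.path)
        if target is None:
            self.send_error(404)
            return
        # 302 rather than 301: the hosting may move again.
        self.send_response(302)
        self.send_header("Location", target)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), CanonicalRedirect).serve_forever()
```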

Microsoft has not done anything wrong with GitHub yet. I will give them the benefit of the doubt, and not rush out of the door. It would and will be different if they were to change their policies.

Rob’s point is valid, and it would be a disgrace if various governments were to push Microsoft into a corner, requiring it to purge content that the smaller, independent GitHub would have left alone. But unless that happens, we’re debating hypotheticals, on the same level as “If I was elected supreme leader of Italy”.

So, as of today, 2018-06-04, I have no intention of moving any of my repositories to other services. I’ll also reply with a link to this blog post, and no accompanying comment, to anyone who suggests I should do so without any benefit for my projects.

How you can tell you’re dealing with a bunch of fanboys

In my previous post, where I criticised Linus’s choice of bumping the kernel’s version to 3 without thinking through the kind of problems we, as distributors, would face with broken build systems that rely on the output of the uname command, I expected mixed reactions, but I mostly thought it would bring in technical arguments.
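To show the kind of breakage I had in mind, here is a minimal, made-up sketch of a version check of the sort that build systems bake in; it is not taken from any specific project, but it fails in exactly this way once the release string no longer starts with “2.”:

```python
import re

# A naive configure-time check of the kind that broke: it assumes the
# kernel release always looks like "2.<minor>.<patch>" and silently
# misbehaves on "3.0". Purely illustrative of the failure mode.
def kernel_supports_feature(uname_release: str) -> bool:
    match = re.match(r"2\.(\d+)\.(\d+)", uname_release)
    if not match:
        return False          # "3.0.0" falls through here...
    return int(match.group(1)) >= 6

print(kernel_supports_feature("2.6.39"))  # True
print(kernel_supports_feature("3.0.0"))   # False, despite being newer
```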

Turns out that the first comment was actually in support of the breakage for the sake of finding bugs, while another (the last at the time of writing) shows the presence of what is, undeniably, a fanboy. A Linux (or Linus) one at that, but still a fanboy. And yes, there are other kinds of fanboys besides Apple’s. Of the two comments, the former is the one I actually respect.

So how do you spot fanboys of all trades? Well, first look for people who stick with one product, or one manufacturer. Be it Apple, Lenovo, Dell, or, in the case of software, Canonical, the Free Software Foundation, KDE or Linus himself: sticking with a single supplier without even opening up to the idea that others have done something good is an obvious sign of being a fanboy.

Now, it is true that I don’t like mixing devices from many different vendors, as they tend to work better together when they come from the same one, but that’s not to say I can’t tell when something from another vendor is good. For instance, after two Apple laptops and an iMac, I didn’t have to stay with Apple… I decided to get a Dell, and that’s what I’m using right now. Similarly, even though I liked Nokia’s phones, my last two phones were a Motorola and, nowadays, an HTC.

Then make sure to notice whether they can’t accept flaws in the product or its maker’s decisions. This is indeed one of the most obnoxious behaviours of Apple’s fanboys, who tend to justify every choice of the company as something done right. Well, here is the catch: not all of them are! Part of this is underscored in the next trait, but it is important to understand that for a fanboy even what would be a commercial failure, able to bring a company near bankruptcy, is a perfect move that was just misunderstood by the market.

Again, this is not limited to Apple fanboys; it shouldn’t be so difficult to identify a long list of Nokia fanboys who keep supporting their multi-headed workforce investment strategy of maintaining a number of parallel operating systems and classes of devices, in spite of a negative market response… and I’m talking about those who don’t stand to gain directly from said strategy — I’m not expecting the people being laid off, or those whose tasks are to be reassigned from their favourite job, to be unsupportive of it, of course.

But while they are so defensive of their love affair, fanboys also can’t see anything good in what their competitors do. And this is unfortunately way too common in the land of Free Software supporters: for them Sony is always evil, Microsoft never does anything good, Apple is only out to make crappy designs, and so on.

This is probably the most problematic situation: if you can’t accept that the other manufacturers (or the other products) have some good sides to them, you will not consider their improvements either. This is why dismissing anybody who claims Apple did something good as a fanboy is counterproductive: let’s look at what they do right, even if it’s not what we want (they are, after all, making decisions based on their general strategy, which is certainly different from the Free Software general strategy).

And finally, you’re either with them or against them. Which is what the comment that sparked this discussion shows. You either accept their exact philosophy or you’re an enemy, just an enemy. In this case, I just had to suggest that Linus’s decision was made without thinking of our (distributors’) side, and I became an enemy who should go use some other project.

With all this on the table, can you avoid becoming a fanboy yourself? I’m always striving to make sure I avoid that; I’m afraid many people don’t seem to accept that.

Know thy competitor

I don’t like the use of the word “enemy” when it comes to software development, as it adds some sort of religious feeling to something that should only be a matter of business and pragmatism. Just so you know.

You almost certainly know that I’m a Free Software developer. And if you have followed me for long enough, you also most likely know that I’ve had my stint working with Ruby and Rails, even though I haven’t worked in that area for a very long time and, honestly, I’d still prefer to stay away from it.

I have criticised a number of aspects of Rails development before, mostly due to my work on the new Ruby packaging framework for Gentoo, which has shown the long list of bad practices still applied in developing Ruby extensions designed to be used by Rails applications. I think the climax of my disappointment with Rails-related development was reached when I was looking at Hobo, which was supposed to be some sort of RAD environment for Rails applications, and turned out to complicate the use of non-standard procedures way more than Rails itself.

It could then be seen as ironic that, after all this, my current line of work includes developing for Microsoft’s ASP.NET platform. Duh! As for why I’m doing this: the money is good, the customer is a good one, and lately I’ve been quite in need of stable customers.

A note here: I’m actually considering moving away from development as my main line of work and getting into the “roaming sysadmin” field. With the recent customers I got, development tends to take too much time, especially as even the customers themselves are not sure how they want things done, and are unable to accept limitations and compromises in most situations. System administration at least only requires me to do the job as quickly and as neatly as possible.

This is not the first time I’ve had to work with Microsoft technologies; I spent my time on .NET and Mono before, earlier this year I had to learn WPF, and I’ve always admitted when Microsoft’s choices are actually better than those of some Free Software projects. Indeed, I like the way they designed the C# language itself, and WPF is quite cool in the way it works, even though I find it a bit too verbose for my tastes.

But with ASP.NET I suddenly remembered why I prefer Free Software. Rails and Hobo come nowhere near the badness of ASP.NET! Not only is the syntax of the aspx files — a mix of standard HTML and custom tags — so verbose that it’s not even funny (why oh why every tag needs to contain runat="server", when no alternative is ever presented, is something I’ll never understand), but even the implementation of the details in the backend is stupid.

Take for instance the Accordion “control”, which is supposed to let you add collapsible panels to a web page without having to play with JavaScript manually, so that the page does not even have to carry the content of the panes when they are not displayed (kinda cool when you have lots of data to display). These controls have a sub-control, the AccordionPane, which in turn has a Header and a Content. I was expecting the Accordion’s AccordionPane’s Header to have a default CSS class to identify it, so that you could apply styles to it quickly… the answer is nope. If you want a CSS class on the header, you have to set a property on each AccordionPane control (which means once per sub-pane), so that it gets exported later on. Lovely.

And let’s not forget that if you wish to develop an externally-accessible application, to test it on devices other than your own computer, your only choice is to use IIS itself (the quick’n’dirty webserver that Visual Studio lets you use cannot be configured to listen on anything other than localhost)… and to make it possible to publish the content to the local IIS, you have to run Visual Studio with administrator privileges (way to go, UAC!).

Compared to this, I can see why Rails has had so much success…

Wasting a CrowningMomentOfIToldYouSo

Last week, Skype had a bit of trouble; well, quite a bit of trouble. That’s the kind of trouble that makes you very angry with your service provider, until you think twice and remember you’re probably not paying for it — at least, that’s what should happen for most people. Yes, I know there are people who pay for Skype, but I’m pretty sure that most of those complaining don’t, for a simple reason: if you’re paying for a service and that service does not work, you don’t bitch about it on the net, you go to customer care and demand your money back.

For whatever reason – which mostly relates to the human instinct for seeing conspiracies everywhere – people blamed Microsoft for it, even though it is virtually impossible for them to be the cause; heck, the acquisition is not even complete yet!

It would have been a good time to show users how relying on a proprietary, walled-garden technology without any reliability assurance, such as Skype, is not the smartest business move. But no, a number of people, including some self-appointed Free Software advocates, preferred once again painting Microsoft as the Big Evil, the One True Ruler and so on. And never mind that Skype has always been a proprietary, closed, patented technology; it was good just because they made a Linux client! Alas.

Now, there may well be another chance to get that crowning moment (geek points to those who guess where my title comes from): if Microsoft really were to drop support for Skype on platforms they don’t control. Right now you can use Skype on Windows, Linux, OS X, Android, iPhone, PSP (3000 and Go models only), Symbian, some TVs, a number of hardphones and so on. Rumour has it that Microsoft is ready to cut off all these accesses to be the only ones controlling the technology. I’d expect otherwise.

While it is difficult to argue that Microsoft cares much about Linux (they definitely care more about OS X than they do about Linux), it seems suicidal for Microsoft to take away the one feature that keeps most Skype users attached to it: omnipresence. Wherever you are, you have Skype, which is why even I keep using it (even though I have a number of backup options). Microsoft seems to know what it means to be interoperable with Linux, from time to time, as shown by them helping Novell work on Moonlight for compatibility with Silverlight.

But facts shouldn’t get in the way of strong opinions when it comes to Microsoft, as people who should know better prefer to paint them as a single-minded, evil corporation, with the aggravating quality of being incompetent and suicidal. I’ll be clear here and say out loud that trying to paint Bill Gates as Elliot Carver is borderline insane.

First of all, trying to paint any corporation as single-minded shows that you have never had to deal with one. In any relatively big company or project, not having multiple heads and directions would be impossible. This is why Microsoft can produce utter crap as well as decent stuff, fail badly or show off cool technology such as the Kinect. But then again, you can’t even argue that they did a decent job at providing a clear API for their Xbox without getting painted as being on their payroll, as if they couldn’t possibly get anything right. Talk about echo chambers, uh?

On the other hand, I don’t have any reason to expect Microsoft to make the obvious marketing move; there are a number of possible moves, and one might very well be to drop support for non-Microsoft platforms in the new version of their software, or at least of their protocol, as unlikely as I think that is. Would that be bad for Linux or for Free Software? Only if we argue that losing the proprietary Skype client is bad — which we could only do if we also accepted that software may be proprietary; I do accept that, but the advocates above don’t always sound that way.

What we could do instead is get ready for the day Skype could collapse due to Microsoft’s actions, and show that it is possible to have an alternative. But having an alternative does not mean merely trying to reverse engineer the protocol; it means getting our act together and finding a decent way to do videochat on Linux without going crazy — I haven’t tried Pidgin in a while, but last time it didn’t let me configure either the audio or the video input, which it would get wrong.

While I know there are enough developers working on this, I also expect advocates, and their sites, to waste the chance of making good publicity for Free Software, preferring instead to play the blame game, as pictured above. Gotta love reality, uh?

Why do FLOSS advocates like Adobe so much?

I’m not sure how this happens, but I see more and more FLOSS advocates supporting Adobe, and in particular Flash, in almost any context out there, mostly because they now look a lot like an underdog, with Microsoft and Apple picking on them. Rather than liking the idea of cornering Flash, a proprietary software product, out of the market, they seem to cheer any time Adobe gets a little more advantage over the competition, and cry foul when someone else tries to ditch them:

  • Microsoft released Silverlight, which is evil – probably because it’s produced by Microsoft, or alternatively because it uses .NET, which is produced by Microsoft – and we have a Free as in Speech implementation of it in Novell’s Moonlight; but FLOSS advocates dump on that too: it’s still evil, because there are patents on .NET and C#; please note that the only FLOSS implementation of Flash I know of is Gnash, which is not exactly up to speed with the kind of Flash applets you find in the wild;
  • Apple’s iPhone and iPad (or rather, all the Apple devices based on what was iPhone OS and is now iOS) don’t support Flash, and Apple pushes content publishers to move to “modern alternatives” starting from the <video> tag; rather than, for once, agreeing with Apple and supporting that idea, FLOSS advocates decided to start name-calling them for lacking support for a ubiquitous technology such as Flash — the fact that Apple’s <video> tag suggestion was tied to the use of H.264 shouldn’t have made any difference at all, since Flash does not support Theora either, so with the exception of the recently released WebM in the latest 10.1 version of the Flash Player, there wouldn’t be any support for “Free formats” anyway;
  • Adobe stirs up a lot of news declaring support for Android; Google announces Android 2.2 Froyo, supporting Flash; rather than declaring Google an enemy of Free Software for helping Adobe spread their invasive and proprietary technology, FLOSS advocates start issuing “take that” comments toward iPhone users as “their phone can see Flash content”;
  • Mozilla refuses to provide any way at all to view H.264 files directly in their browser, leaving users unable to watch YouTube without Flash unless they resort to a ton of hacky tricks to convert the content into Ogg/Theora files; FLOSS advocates keep on supporting them because they haven’t compromised.

What is up here? Why should people consider Adobe a good friend of Free Software at all? Maybe because they control formats that are usually considered “free enough”: PostScript, TIFF (yes, they do), PDF… or because some of the basic free fonts that TeX implementations and the original X11 used came from them. But none of this really sounds relevant to me: they don’t provide a Free Software PDF implementation; rather, they have their own PDF reader, while the Free implementations have to chase after the format, with mixed results, to keep opening new PDF files. As much as Mike explains the complexity of it all, the Linux Flash player is far from being a nice piece of software, and their recent abandonment of the x86-64 version of the player makes it even more sour.

I’m afraid that the only explanation I can give for this phenomenon is that most “FLOSS advocates” align themselves straight with, and only with, the Free Software Foundation. And the FSF seems to have a very personal war against Microsoft and Apple; probably because the two of them actually show that in many areas Free Software is still lagging behind (and if you don’t agree with this statement, please have a reality check and come back again — this is not to say that Free Software is not good in many areas, or that it cannot improve to become the best), which goes against their “faith”. Adobe, on the other hand, while not really helping Free Software out (sorry, but Flash Player and Adobe Reader are not enough to say that they “support” Linux; and don’t try to sell me that they are not porting Creative Suite to Linux just so people would use better Free alternatives), is not one of their targets.

Why do I feel like taking a shot at the FSF here? Well, I have already repeated multiple times that I love the PDFreaders.org site from the FSFe; as far as I can see, the FSF only seems to link to it in one lost and forgotten page, just below a note about CoreBoot… which doesn’t make it at all prominent. Also, I couldn’t find any open letter blaming PDF for being a patent-risky format, which is instead present on the PDFreaders site:

While Adobe Systems grants a royalty-free use of any patents to the PDF format, in any application that adheres to the PDF specifications, other companies do hold patents that may limit the openness of the standard if enforced.

As you can see, the first part of the sentence admits that there are patents on the PDF format, but royalty-free use is granted… by Adobe, at least; there is nothing from other parties that might hold them.

At any rate, I feel like there is a huge double standard here: anything that comes out of Microsoft or Apple, even under Free Software licenses or patent pledges, is evil; but proprietary software and technologies from Adobe are fine. It’s silly, don’t you think?

And for those who still would like to complain about websites requiring Silverlight to watch content, I’d like to propose a different solution to ask for: don’t ask them to provide it with Flash, but rather with a standard protocol, for which we have a number of Free Software implementations, and which is supported on the mainstream operating systems for both desktops and mobile phones: RTSP is such a protocol.
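To give an idea of how little it takes to consume such a stream from Free Software, here is a short sketch using OpenCV, whose FFmpeg backend speaks RTSP; the stream URL is a made-up placeholder:

```python
import cv2  # OpenCV; its FFmpeg backend handles RTSP streams

# The URL is a placeholder; any standards-compliant RTSP stream
# would do, which is exactly the point of asking for the protocol.
stream = cv2.VideoCapture("rtsp://media.example.com/live/stream1")

while stream.isOpened():
    ok, frame = stream.read()
    if not ok:
        break
    cv2.imshow("RTSP stream", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

stream.release()
cv2.destroyAllWindows()
```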

Sometimes it’s really just about what’s shinier

Recently, I bought an Xbox 360 (Elite) unit to replace my now-dead PlayStation 3 (yes, I’ll replace that as well, but for now this option was cheaper, and I can borrow a few games from a friend of mine this way). Please don’t start with the whole “Micro$oft” crap, and learn to attack your adversary on proper (technical) grounds rather than with slurs and the like.

Besides, I can’t see any reason why any of the three current-generation consoles is better than the others for what concerns Free Software ideals: sure, they do use some open source software in their products (PS3, PSP and Sony Bravia TVs), but as far as I can see they don’t give much back in terms of new software, nor do they seem to support Free Software that could somewhat work with their hardware (like a proper Free DLNA implementation, which would be very welcome by PS3 and Bravia users). Even the one thing that the PS3 had that the others lacked – support for installing Linux, using PPC64 and the Cell Broadband Engine to develop for IBM’s new platform – was dropped from the new “Slim” model.

I also have to say that even when I’m taking time off I end up thinking about the technical details, to the point that my friends dislike me a bit when I start decomposing the way things are implemented in games; probably just as much as I disliked my amateur-director friend when he decomposed the films we saw together — on the other hand, after helping him out with his own production, I’m much more resilient to that, and I’ve actually started to take a liking to the special content on DVDs and BluRays where they do the same. So with this in mind, I made some considerations about the Xbox 360 and the PlayStation 3, and how they compare, from my point of view.

For some reason, I always saw the Xbox as having a worse graphics engine than the PlayStation 3; this was somewhat supported by my friend who owns one, because he had it hooked up to an old, standard-definition CRT, rather than to a modern High Definition LCD, like I had the PlayStation 3 set up with. With this in mind, I definitely thought of the Xbox as a “lower” console; on the other hand, I soon noticed, after connecting it to my system, that it fares pretty well in comparison during gameplay (I’m saying this looking at Star Ocean: The Last Hope — gotta love second-hand game stores!), so what might have caused this (at least around here) common misconception about the Xbox’s graphics being worse?

  • the original Xbox 360 models, especially the entry-level Arcade, lacked HDMI support; while even the PlayStation 3 ships with just the worst cable possible (composite video), it has at least out-of-the-box support for standard HDMI cables, which are both cheap and easy to find;
  • the only two cables supporting high-definition resolutions on the original models are VGA and component video cables; the former is unlikely to be supported by lower-end HD LCDs – like the one my friend bought a few months ago – and also depends on having a proper optical audio input port to feed the sound; the latter is difficult to find, as only one store out of the ten that sell games and consoles in my area had one available;
  • since a lot of people bought the entry-level version to spend as little as possible, it’s very likely that many of them didn’t want to spend an extra 30 euro to get the cable either, which means lots of them still play in standard definition;
  • even those who spent money to get the cable might not get the best graphics available; I got the cable for my friend as an Xmas gift (note: I use the name Xmas just to note that it is mostly a convention for me; being an atheist – and my friend as well – I don’t care much), and he was enthusiastic about the improvement; it was just a couple of weeks later that I found out he hadn’t configured the console to output in Full HD resolution through the component cable;
  • the Dashboard menu is not rendered in HD quality; it might sound petty to note that, but it does strike one as odd to have heavily aliased fonts and blurry icons on top of an HD-quality game render – such as the above-noted Star Ocean, or Fable 2 – especially when it happens as an achievement is reached;
  • cutscenes are the killers! While the real-time renders are pretty much on par with, if not better than, the PlayStation 3, the pre-rendered full-motion videos are a different story: Sony can make use of the huge storage provided by 50GB BluRay discs, while Microsoft has to live with 4GB DVDs; this does not only mean that you end up with 3-disc games, like Star Ocean, that need to be fully installed on the hard drive (which is, by the way, optional on the entry-level system), but also that they cannot just put minutes over minutes of HD FMVs on the disc, and end up compressing them (see the back-of-the-envelope sketch right after this list); the opening sequence of Star Ocean shows this pretty well: the DVD-quality video is duly noted, especially when compared with the rest of the awesome game renderings; luckily, the in-game cutscenes are rendered in real time instead.
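To put some rough numbers on that cutscene point, here is a quick back-of-the-envelope calculation; the bitrate is an assumption of mine for illustration, not an official figure:

```python
# Back-of-the-envelope numbers, not official specs: assume pre-rendered
# HD video at roughly 15 Mbit/s, a single-layer DVD at about 4.7 GB and
# a dual-layer Blu-ray at about 50 GB.
BITRATE_MBPS = 15
GB_TO_MBIT = 8 * 1000  # 1 GB is roughly 8000 Mbit, using decimal gigabytes

def minutes_of_video(disc_gb: float) -> float:
    """How many minutes of video fit on a disc at the assumed bitrate."""
    return disc_gb * GB_TO_MBIT / BITRATE_MBPS / 60

print(f"DVD:     {minutes_of_video(4.7):6.1f} minutes of HD FMV")  # ~41.8
print(f"Blu-ray: {minutes_of_video(50):6.1f} minutes of HD FMV")   # ~444.4
```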

So why do I care about noting these petty facts? Well, there is a lesson to be learned here as well: Microsoft’s choices about the system impacted its general reputation — not providing HDMI support, requiring many additional accessories on top of the basic system (high-definition cable, hard drive), and not supporting standard upgrades (you need Xbox-specific storage to back up and copy saves around, and you cannot increase the system’s storage, while Sony allows you to use USB mass storage devices for copy – and backup – operations, as well as having user-serviceable hard drives). A system that might have been, in many areas, better is actually considered lower-end by many, many people.

No matter how many technical reasons you have to win, you might still fail if you don’t consider what people will say about your system! And that includes the people who can’t be bothered to read manuals, instructions, and documentation. This is one thing that Linux developers, and advocates, need to learn from others, before they are crushed by learning it the hard way.

And as a final note: I got the Xbox for many reasons, among which, as I stated above, was the chance to borrow some games from a friend rather than outright buying them; on the whole, though, I think I still like the PS3 better. It’s more expensive, and sometimes it glitches badly in graphics and physics (Fallout 3, anybody?), but there are many reasons why it’s better. The Xbox is much noisier – even when installing games to the hard drive – to begin with; and then the PlayStation 3 plays BluRays, does not need line-of-sight for the remote control, and does not require special cables to charge the wireless controllers. I think the system is generally better, although the Xbox got more flak than it should have, at least from the people I know around here, for the above-noted problems.

(Mis)feature by (mis)feature porting

There is one thing that doesn’t upset me half as much as it should, likely because I’m almost never involved in end-user software development nowadays (although it can be found in back-end software as well): feature-by-feature “ports” (or rather, re-implementations).

Say there is a hugely-known, widely-used proprietary software product, and lots of people feel that a free alternative to it is needed (which happens pretty often, to be honest, and is the driving force of the Free Software movement, in my opinion); you have two main roads, among a gazillion possible choices, that you can take: you can focus on the use cases for the software, or you can re-implement it feature by feature. I learnt, through experience, that the former is always better than the latter.

When I talk about experience, I don’t mean the user experience but rather the actual experience of coding such ports. A long time ago, one of my first projects with Qt (3) under Linux was an attempt at porting the ClrMame Pro tool (for Windows) — interestingly enough, I cannot find the homepage of the tool on Google; I rather get the usual spam-trap links from the search. My reason for trying to re-implement that software, at the time, was that I used to be a huge MAME player (with just a couple of ROMs) and that the program didn’t work well under Wine (and the few tries I took at fixing Wine didn’t work out as well as I’d have hoped — yet I think a few of my patches made it into Wine, although I doubt the code persists today).

Feature-by-feature porting is usually far from easy, especially for closed-source applications, because you try to deduce the internal workings from the external interface (be it user interface or programming interface), and that rarely works out as well as you would like. Given this is often called reinventing the wheel, you should think of it as trying to reinvent the wheel after being given just a cart without wheels, looking at the way they should connect. For open source software, this is obviously easier to do.

Now, while there is so much software out there that makes the same mistake, I’d like to look first at one that, luckily, ended up breaking away from the feature-by-feature idea and started working on a different method, albeit slowly and while still being tied too much, in my opinion, to the original concept: Evolution. Those who used the first few versions of Evolution might remember that it clearly, and unbearably, tried to imitate, feature by feature, Microsoft Outlook 2000. The same icon pane on the left side, the same format for the contacts’ summary, and the same modules. The result is … not too appealing, I’d say. As I said, the original concept creeps in today as well, as you still have essentially the same modules: mail, contacts, calendar, tasks and notes, the last two being the ones I find quite pointless today (especially considering the presence of Tomboy and GNote). A similar design can be found in KDE’s Kontact “shell” around the separated components of the PIM package.

On the other hand, I’d like to pick a different, proprietary effort: Apple’s own PIM suite. While they tend to integrate their stuff quite tightly, they have taken a quite different approach with their own programs: Apple’s Mail, iCal and Address Book. They are three different applications; they share the information they store with one another (so that you can send and receive meeting invites through Mail, picking up the contacts’ emails), but they have widely different, sometimes inconsistent interfaces when you put one near the other. On the other hand, each interface seems to make sense on its own, and in my opinion ends up faring pretty well on the usability scale. What they do not try to do is what Microsoft did, that is, forcing the same base graphical interface onto a bunch of widely different use cases.

It shouldn’t then be surprising that the other case of feature-by-feature (or in this case, misfeature-by-misfeature) porting is again attached to Microsoft on the “origin” end: OpenOffice. Of course, it is true that the original implementation comes from a different product (StarOffice) that didn’t really have the kind of “get the same” approach that Evolution and other projects have taken, I guess. On the other hand, they seem to keep going that way, at least to me.

The misfeature that brought me to write this post today is a very common one: automatic hyperlink transformation of URLs and email addresses… especially email addresses. If I consider the main target output of OpenOffice, I’d expect printed material (communications, invoices, and so on) to be at the top of the list. And in that kind of product you definitely don’t need, nor want, those things hyperlinked; they would not be useful and would be mostly unusable. Even if you do produce PDFs out of it (which support hyperlinks), I don’t think that just hyperlinking everything with an at-character in it would be a sane choice. As I have been made aware, the most likely reason for OpenOffice to do that is that… Word does. But why does Word do it in the first place?
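For reference, the misfeature boils down to something like this crude sketch; it is an illustration of the behaviour, not OpenOffice’s actual implementation:

```python
import re

# A crude version of the auto-hyperlinking misfeature: anything with an
# at-sign in it gets wrapped in a mailto: link, whether you wanted it
# or not. The regex is deliberately simplistic, as the real behaviour
# often is.
AUTOLINK = re.compile(r"\b([\w.+-]+@[\w-]+(?:\.[\w-]+)+)\b")

def autolink_emails(text: str) -> str:
    return AUTOLINK.sub(r'<a href="mailto:\1">\1</a>', text)

print(autolink_emails("Invoice queries: billing@example.com"))
# Invoice queries: <a href="mailto:billing@example.com">billing@example.com</a>
```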

It’s probably one of two things. At the time of Office 2000 (or was it 97? I said 97 before on identi.ca, but thinking about it a bit, it might have been 2000 instead), Microsoft tried to push Word as a “web editor”: the first amateur websites started to crop up, and FrontPage was still considered a much more advanced tool than Word; having auto-hyperlinking there was obviously needed. The other option dates to about the same time, when Microsoft tried to push Word as … Outlook’s mail editor (do you remember the time when you received mail from corporate contacts that was only an attached .doc file?).

So in general, the fact that some other software has a feature does not really justify implementing that feature in a new one. Find out why the feature would be useful, and then consider it again.

Can we be not ready for a technology?

İsmail is definitely finding me topics to write about lately… this time it was in relation to a tweet of mine ranting about Wave’s futility; I think I should elaborate a bit on this topic.

Regarding the Wave rant, which adds to my first impressions posted a few weeks ago, I think things are starting to go downhill. On one side, more and more people are getting Google Wave, so you can find people to talk with; on the other, of the Waves I received, only one was actually interesting (but still nothing that makes me feel like Wave is useful). The rest fall into two categories: on one side you get the ping tests, which I admit I also caused – because obviously the first thing you do in something like Wave is ping somebody you feel comfortable talking with – and on the other hand I had three different waves of people… discussing Wave itself.

And you know that there is a problem when the medium is mostly used to discuss itself.

And here is where İsmail and I diverge: for him the problem is that “we’re not ready” for the Wave technology; myself, I think that the phrase “we’re not ready” can only come out of a sci-fi book, and that there is something wrong with the technology if people don’t seem to find a reason to use it at all. But I agree with him when he says that some technologies, like Twitter, would have looked definitely silly and out of place a few years ago. I agree because we have had a perfect example that is not hypothetical at all.

You obviously all know Apple’s Dashboard, from which even the idea of Plasma for KDE seems to have come, and from which Microsoft seemingly borrowed heavily for the Vista and Win7 desktops. Do you think Apple was the first to think about that stuff? Think again.

It was 1997, and Microsoft released Internet Explorer 4, showing off the Active Desktop … probably one of the biggest failures in their long-running career. The underlying idea is not far at all from that of Apple’s “revolutionary” Dashboard: pieces of web pages to put on your desktop. At the same time, Microsoft released one of their first free development kits: Visual Basic 5 Control Creation Edition (VB5CCE), which allowed you to learn their VB language; and while you couldn’t compile applications to redistribute, you could compile ActiveX controls, which could in turn be used by the Active Desktop page fragments. Yes, I did use VB5CCE; it was what let me make the jump from the good old QBasic to Windows “programming”.

So, if the whole concept of Dashboard (and Plasma, and so on) makes people so happy now, why did it fail at the time? Well, to use İsmail’s words, “we weren’t ready for it”; or to use mine, the infrastructure wasn’t ready. At the time, lots of users were still not connected to any network, especially outside of the US; staying connected cost a lot, and bandwidth was limited, as were the resources of the computers. Those of us (me included) who at the time had no Internet connection at all felt deprived of resources by something totally useless to them; those who had dial-up Internet connections would feel their bandwidth being eaten up by something they probably didn’t care enough about.

Who was at fault here? Us, for not wanting such nonsense running on our already underpowered computers, or Microsoft, for wanting to push out a technology without proper infrastructure support? Given the way Apple was acclaimed when they brought Dashboard to their computers, I’d say the latter: they actually paid the price of pushing something out a few years too early. Timing might not be everything, but it’s definitely something.

Driver hell — when will it stop?

To get some extra pocket money to spend on the everyday maintenance of my systems, I have also ended up doing maintenance on Windows computers on a daily basis; it’s not extraordinarily bad, and it usually doesn’t take me more than a day for a single computer, even if it’s the first time I see it (once I’ve seen it once, I already know what to expect).

Unfortunately, it’s not always feasible to convert people to Linux yet; although I think I might start soon enough, at least with a few people whose only use of a computer is to “browse websites, send email, watch a movie from time to time”. To make the task easier I obviously set up systems with Firefox and Thunderbird, VLC and OpenOffice, so that at least some programs will be familiar on the “new” systems when they migrate.

Unfortunately, it seems like Windows, especially Windows XP, which a lot of my customers have OEM licenses for, has become a driver hell just like it was in the old days. And vendors don’t make that any easier. Most vendors providing complete systems tend not to care about their users enough to provide downloads for the drivers (they just tell you to use their recovery partition; guess what? that stuff often doesn’t work extremely well, if at all, and in one instance it was even mounted as a drive on the normal OS… which meant it was infected too!), and the component manufacturers have websites that to call complex would be an understatement:

  • ATI/AMD’s website is a mess to navigate; while they do (or did) chipsets too, you cannot really find a “chipset drivers” section; if you have an older version of a motherboard that is supported by legacy drivers, you’ve got to navigate at least four pages before you find that out!
  • Asus’s website is a mess of JavaScript; whenever you ask to download something you have to tell them the operating system you’re looking for – even for BIOS updates – the window is centred on the screen and does not work on cellphones, and for once I could have used a cellphone just fine if it weren’t for that (given that Asus boards can usually update the BIOS from USB sticks); never mind that half the time, whatever operating system you select, the same stuff is given to you;
  • Intel’s website is also a labyrinth; to download some driver you have to search for the right class of software, then decide on one in particular, and it often proposes two options; then you have to agree to the license and click download again… which does not download the thing but rather redirects you to a page that calls a JavaScript function to download the file; that JavaScript can sometimes not work at all, so they provide you with the usual “if the file does not download, click here”; but rather than being a direct link, that is also a JavaScript function; checking the function, it lists a clear bouncer link (which you could download with wget, too!), but with a little more presence of mind, you can notice that the link is provided as a GET parameter to the (dynamic, at this point) page on Intel’s server; much easier to copy that out and drop the rest, I’d say (see the sketch after this list);
  • Realtek’s website sometimes does not work properly; on the other hand, they give you direct FTP links, so once you know the FTP server you can find the drivers just fine while avoiding the website; it would have been nicer to split it down by driver type so that the listing wouldn’t take a few minutes, but I have to say it is the system that works best, even if FTP does make me feel like we’re back in the early ‘90s;
  • almost all download sites tend to have pretty slow, or capped, connections; I can understand Asus, Gigabyte and Realtek, which have their main servers in Taiwan or so it would seem, but what about Intel? Luckily, at least ATI and nVidia (which have the biggest driver packs) have very fast servers.
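As for the Intel bouncer-link trick described in the list above, the idea fits in a few lines of Python; the URLs and the parameter name here are hypothetical reconstructions, not Intel’s actual ones:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical example of the pattern described above: the JavaScript
# "click here" handler points at a dynamic page whose GET parameter
# carries the actual file URL, which wget can then fetch directly.
bouncer = ("https://downloadcenter.example.com/confirm.aspx"
           "?httpDown=https://downloadmirror.example.com/chipset-inf.exe")

params = parse_qs(urlparse(bouncer).query)
real_url = params["httpDown"][0]
print(real_url)  # feed this straight to wget and skip the JavaScript
```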

Then there are other problems, like trying to understand that “ATI Technologies, Inc. SBx00 Azalia” is actually the name reported by lspci for a Realtek Azalia codec that needs the HDA drivers from Realtek; or trying to guess the driver version, or the driver’s name, from the downloaded files, which often enough don’t have any kind of naming or versioning scheme. Again, ATI (for quite a long time) and nVidia (recently) solved this in a pretty nice way: they use their logo for the install executable; this does not make it very manageable under Linux though, given that Nautilus doesn’t (yet) show the PE icon (maybe I can modify it to load the PE file and extract the icon? — see the sketch below).
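If I (or anybody else) ever get around to that, the third-party pefile module would be the obvious starting point. Here is a rough sketch of the enumeration, with a hypothetical installer name; building a proper .ico would also need the RT_GROUP_ICON directory reassembled, which I leave out:

```python
import pefile  # third-party module: pip install pefile

# Rough sketch of the idea floated above: open the installer with
# pefile and enumerate its icon resources. The filename is a made-up
# placeholder.
pe = pefile.PE("driver-setup.exe")

RT_ICON = pefile.RESOURCE_TYPE["RT_ICON"]
for entry in pe.DIRECTORY_ENTRY_RESOURCE.entries:
    if entry.id != RT_ICON:
        continue
    for icon in entry.directory.entries:
        # Take the first language entry for each icon resource and
        # read its raw bytes out of the file.
        data_entry = icon.directory.entries[0].data.struct
        raw = pe.get_data(data_entry.OffsetToData, data_entry.Size)
        print(f"icon resource {icon.id}: {len(raw)} bytes")
```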

Let’s just hope that Microsoft’s moves with Vista and Windows 7 will be a springboard for Linux for the masses; I sincerely count more on Microsoft’s changes than on Google OS, as I’ve noted, since Vista already gave us something useful for Linux.