Don’t Ignore Windows 10 as a Development Platform for FLOSS

Important Preface: This blog post was originally written on 2020-05-12, and scheduled for later publication, inspired by this short Twitter thread. As such it well predates Microsoft’s announcement of expanding WSL2 support to graphical apps. I considered trashing the blog post, or seriously re-editing it in light of the announcement, but I honestly lack the energy to do that now. It does leave a bad taste in my mouth to know that it will likely get drowned out in the noise of the new WSL2 features announcement.

Given the topic of this post I guess I need to add a preface to point out my “FLOSS creds” — because I have already seen too many attacks on people who even use Windows at all. I have been an opensource developer for over fifteen years now, and part of the reason why I left my last bubble was because it made it difficult for me to contribute to various opensource projects. I say this because I’m clearly a supporter of Free Software and Open Source, wherever possible. I also think that different people have different needs, and that ignoring that is a failure of the FLOSS movement as a whole.

The “Year of Linux on the Desktop” is now a meme that has been running its course to the point of being annoying. Despite what FLOSS advocates keep saying, “Linux on the Desktop” is not really moving, and while I do have some strong opinions on this, that’s for another day. Most users, and in particular newcomers to FLOSS (both as users and developers), are probably using a more “user friendly” platform — if you leave a comment with the joke about UNIX being selective with its friends, you’ll end up in a plonkfile, be warned.

About ten years ago, it seemed like the trend was for FLOSS developers to use MacBooks as their daily laptops. I did that for a while myself — a UNIX-based platform with all the tools of the trade, which allowed quite a bit of work to be done without having access to a Linux platform. SSH, Emacs, GCC, Ruby, and so on. And at the same time, you had the stability of Mac OS X, with great battery life and hardware that worked out of the box. But more recently, Apple’s move towards “walled gardens” has been taking away from this feasibility.

But back to the main topic. For many years now, I’ve been using a “mixed setup” — a Linux laptop (or more recently desktop) for development, and a Windows (7, then 10) desktop for playing games, editing photos, designing PCBs, and for logic analysis. The latter is because Saleae Logic takes a significant amount of RAM when analysing high-frequency signals, and since I have been giving my gamestations as much RAM as I can just for Lightroom, it makes sense to run it on the machine with 128GB of RAM.

But more recently I have been exploring the ability to use Windows 10 as a development platform. In part because my wife has been learning Python, and since learning a new operating system and paradigm at the same time would have been a bloody mess, she’s doing so on Windows 10, using Visual Studio Code and Python 3 as distributed through the Microsoft Store. While helping her, I was exposed to Windows as a Python development platform, so I gave it a try when working on my hack to rename PDF files, and it turned out to be quite okay for a relatively simple workflow. And the work on the Python extension keeps making it more and more interesting — I’m not afraid to say that Visual Studio Code is better integrated with Python than Emacs is, and I’m a long-time user of Emacs!

Over the last week I have stepped up further how much development I’m doing on Windows 10 itself. I have been using Hyper-V virtual machines for Ghidra, to make use of the bigger screen (although admittedly I’m just using RDP to connect to the VM, so it doesn’t really matter that much where it’s running), and in my last dive into the Libre 2 code I felt the need for a fast and responsive editor to re-implement and execute parts of the disassembled code, to figure out what it’s trying to do — so once again, Visual Studio Code to the rescue.

Indeed, Windows 10 now comes with an SSH client, and Visual Studio Code integrates very well with it, which meant I could just edit the files saved in the virtual machine and have the IDE also build them with GCC and execute them to get myself an answer.

Then while I was trying to use packetdiag to prepare some diagrams (for a future post on the Libre 2 again), I found myself wondering how to share files between computers (to use the bigger screen for drawing)… until I realised I could just install the Python module on Windows, and do all the work there. Except for needing sed to remove an incorrect field generated in the SVG. At which point I just opened my Debian shell running in WSL, and edited the files without having to share them with anything. Uh, score?
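
Incidentally, the same cleanup could have been done in a few lines of Python, avoiding the WSL round-trip altogether. Here is a minimal sketch, assuming the bogus field can be matched with a regular expression (the pattern and filename here are hypothetical):

import re

with open('diagram.svg', encoding='utf-8') as svg_file:
    svg = svg_file.read()

# Drop the offending element; the actual pattern depends on what
# packetdiag emitted, this one just strips empty <text/> elements.
svg = re.sub(r'<text[^>]*/>\s*', '', svg)

with open('diagram.svg', 'w', encoding='utf-8') as svg_file:
    svg_file.write(svg)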

So I have been wondering: what’s really stopping me from giving up my Linux workstation most of the time? Well, there’s hardware access — glucometerutils wouldn’t really work on WSL unless Microsoft plans to integrate a significant number of compatibility interfaces. Similarly for hardware SSH tokens — despite PC/SC being a Windows technology to begin with. Multiplexed (screen) and tabbed shells are definitely easier to run on Linux right now, but I’ve seen tweets about modern terminals being developed by Microsoft and even released as FLOSS!
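
To give an idea of the hardware-access gap: the simplest glucometerutils drivers just open a serial port with pyserial, and that much actually works natively on Windows; it’s the drivers that need raw USB or HID access that have no WSL equivalent. A rough sketch of the serial case (the port names and the handshake byte are made up for illustration):

import sys

import serial  # pyserial works natively on both Windows and Linux

# Device names differ per platform; these are illustrative defaults.
port = 'COM3' if sys.platform == 'win32' else '/dev/ttyUSB0'

with serial.Serial(port, baudrate=9600, timeout=1) as device:
    device.write(b'\x04')   # hypothetical wake-up byte
    print(device.read(16))  # hypothetical response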

Ironically, I think editing this blog is the most miserable experience for me on Windows. And not just because of the different keyboard (as I share the gamestation with my wife, the keyboard is physically a UK keyboard — even though I type US International), but also because I miss my compose key. You may have noticed already that this post is full of em-dashes and en-dashes. Yes, I have been told about WinCompose, but the last time I tried using it, it didn’t work and even screwed up my keyboard altogether. I’m now trying it again, at least on one of my computers, and if it doesn’t explode in my face this time, I may extend it to the others later.

And of course it’s probably still not as easy to set up a build environment for things like unpaper (although at that point, you can definitely run it in WSL!), or to have a development environment for actual Windows applications. But this is all a matter of a different set of compromises.

Honestly speaking, it’s very possible that I could survive with a Windows 10 laptop for my on-the-go opensource work, rather than the Linux one I’ve been using. With the added benefit of being able to play Settlers 3 without having to jump through all the hoops from the last time I tried. Which is why I decided that the pandemic lockdown is the perfect time to try this out: I barely use my Linux laptop anyway, since I have a working Linux workstation available all the time. I have indeed reinstalled my Dell XPS 9360 with Windows 10 Pro, and installed both a whole set of development tools (Visual Studio Code, Mu Editor, Git, …) and a bunch of “simple” games (Settlers, Caesar 3, Pharaoh, Age of Empires II HD); Discord ended up in the middle of both, since it’s actually what I use to interact with the Adafruit folks.

This doesn’t mean I’ll give up on Linux as an operating system — but I’m a strong supporter of “software biodiversity”, so the same way I try to keep my software working on FreeBSD, I don’t see why it shouldn’t work on Windows. And in particular, I have always found providing FLOSS software on Windows to be a great way to introduce new users to the concept of FLOSS — and focusing on providing FLOSS development tools means giving an even bigger chance for people to build more FLOSS tools.

So is everything ready and working fine? Far from it. There are a lot of rough edges, some of which I found myself, which is why I’m experimenting with developing more on Windows 10: to see what can be improved. For instance, I know that the reuse tool has some rough edges with the encoding of input arguments, since PowerShell appears to still not default to UTF-8. And I failed to use pre-commit for one of my projects — although I have not yet taken note of exactly what failed, to start fixing it.
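
For reference, here is a quick way to check what encoding Python is going to assume when run from PowerShell; this is the kind of check I used, not anything from the reuse tool itself:

import locale
import sys

# Under PowerShell these often report a legacy code page (e.g. cp1252)
# rather than UTF-8.
print(sys.stdout.encoding)
print(locale.getpreferredencoding(False))

# Python 3.7 and later can be forced into UTF-8 mode, which sidesteps
# most of this, via the PYTHONUTF8=1 environment variable or -X utf8.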

Another rough edge is in documentation. Too much of it assumes a UNIX environment only, and a lot of it, if it covers Windows at all, assumes “old school” batch files are in use (for instance for Python virtualenv support), rather than the more modern PowerShell. This is not new — a lot of modern documentation is only valid for bash, and if you were to use an older operating system such as Solaris you would find yourself lost with the tcsh differences. You could probably see similar concerns back in the days when bash was not standard, and maybe we’ll have to go back to that kind of deal. Or maybe we’ll end up with some “standardization” of documentation that can be translated between different shells. Who knows.
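
The virtualenv case is a good example of how the same instructions fork per shell: a venv puts its activation scripts in Scripts\ on Windows (activate.bat for cmd.exe, Activate.ps1 for PowerShell) and in bin/ elsewhere. A small sketch showing the layout difference that UNIX-only documentation trips over:

import sys
from pathlib import Path

# A venv created as "python -m venv env" lays out its scripts
# differently per platform.
venv = Path('env')
scripts = venv / ('Scripts' if sys.platform == 'win32' else 'bin')

if sys.platform == 'win32':
    print(scripts / 'activate.bat')   # cmd.exe
    print(scripts / 'Activate.ps1')   # PowerShell
else:
    print(f'source {scripts / "activate"}')  # bash and friends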

But to wrap this up, I want to give a heads-up to all my fellow FLOSS developers: Windows 10 shouldn’t be underestimated as a development platform. And if they intend to be widely open to contributions, they should probably give a thought to how their code works on Windows. I know I’ll have to keep this in mind for my own future work.

Blog Redirects, Azure Style

Last year, I set up an AppEngine app to redirect the old blog’s URLs to the WordPress install. It’s a relatively simple Flask web application, although it turned out to be around 700 lines of code (quite a bit just to serve redirects). While it ran fine for over a year on Google Cloud without me touching anything, while fitting into the free tier, I had to move it as part of my divestment from GSuite (which is only vaguely linked to me leaving Google).

I could have just migrated the app to a new consumer account on AppEngine, but I decided to try something different, to avoid the bubble, and to compare other offerings. I decided to try Azure, which is Microsoft’s cloud offering. The first impressions were mixed.

The good thing about the Flask app I used for redirection being that simple is that nothing ties it to any one provider: the only things you need are a Python environment, and the ability to install the requests module. For the same codebase to work on both AppEngine and Azure, though, one simple change seems to be needed. Both providers appear to rely on Gunicorn, but AppEngine looks for an object called app in the main module, while Azure looks for it in the application module. This is trivially solved by defining the whole Flask app inside application.py and having the following content in main.py (the command line support is for my own convenience):

#!/usr/bin/env python3

import argparse

from application import app


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--listen_host', action='store', type=str, default='localhost',
        help='Host to listen on.')
    parser.add_argument(
        '--port', action='store', type=int, default=8080,
        help='Port to listen on.')

    args = parser.parse_args()

    app.run(host=args.listen_host, port=args.port, debug=True)
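
For completeness, the counterpart application.py holds the Flask app itself. The real redirector is around 700 lines of rewrite rules, but a minimal sketch of its shape, with a hypothetical catch-all route standing in for the real mapping, would be:

from flask import Flask, redirect

app = Flask(__name__)


@app.route('/<path:old_path>')
def old_blog_redirect(old_path):
    # Hypothetical stand-in for the real rewrite logic.
    return redirect(f'https://example.wordpress.com/{old_path}', code=301)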

The next problem I encountered was with the deployment. While there are plenty of guides out there on using different builders to set up the deployment on Azure, I was lazy and went straight for the most clicky one, which used GitHub Actions to deploy from a (private) GitHub repository straight into Azure, without having to install any command line tools (sweet!). Unfortunately, I hit a snag in the form of what I think is a bug in the Azure GitHub Action template.

You see, the generated workflow for the deployment to Azure pretty much zips up the content of the repository, after creating a virtualenv directory to install the requirements defined for it. But while the workflow creates the virtualenv in a directory called env, the default startup script for Azure looks for it in a directory called antenv. So for me the app was failing to start until I changed the workflow to use the latter:

    - name: Install Python dependencies
      run: |
        python3 -m venv antenv
        source antenv/bin/activate
        pip install -r requirements.txt
    - name: Zip the application files
      run: zip -r myapp.zip .

Once that problem was solved, the next issue was figuring out how to set up the app on its original domain and have it serve TLS connections as well. This turned out to be a bit more complicated than expected, because I had set up CAA records in my DNS configuration to only allow Let’s Encrypt, but Microsoft uses DigiCert to provide the (short lived) certificates, so until I removed that record, no certificate could be issued (oops.)
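
If you want to check for this kind of clash before flipping the DNS over, CAA records are easy enough to query. Here is a sketch using dnspython (my choice of tooling, not something Azure requires); since Azure’s managed certificates come from DigiCert, its issuer identity digicert.com needs to be allowed:

import dns.resolver  # pip install dnspython

# List who is allowed to issue certificates for the domain; an Azure
# managed certificate needs digicert.com alongside letsencrypt.org.
for record in dns.resolver.resolve('example.com', 'CAA'):
    print(record)  # e.g. 0 issue "letsencrypt.org"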

After everything was set up, here are a few more of the differences I noticed between the two services.

First of all, Azure does not provide IPv6, although since they use CNAME records this can change at any time in the future. This is not a big deal for me, not only because IPv6 is still dreamland, but also because the redirection points to WordPress, which does not support IPv6 either. Nonetheless, it’s an interesting point to make: despite Microsoft having spent years preparing for IPv6 support, and having even run Teredo tunnels, they also appear not to be ready to provide modern service entrypoints.

Second, and related, it looks like on Azure there’s a DNAT in front of the requests sent to Gunicorn — all the logs show the requests coming from 172.16.0.1 (a private IP address). This is the opposite of AppEngine, which shows the actual request IP in the log. It’s not a huge deal, but it does make it a bit annoying to figure out whether someone is trying to attack your hostname. It also makes the missing IPv6 support funnier, given that the application itself does not appear to need to support the new addresses.
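
Presumably the front end still passes the original address along in a header; if that header is the de-facto standard X-Forwarded-For (an assumption on my part, I have not verified it), Werkzeug ships a middleware that would put the real client IP back into Flask’s request logs:

from werkzeug.middleware.proxy_fix import ProxyFix

from application import app

# Trust exactly one proxy hop in front of Gunicorn; the header names
# and hop count are assumptions about Azure's front end.
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1)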

Speaking of logs, GCP exposes structured request logs. Logging is a pet peeve of mine, and this is something GCP at least makes easy to deal with: structured logs let you filter much more easily for requests terminated with an error status. That is something I paid close attention to in the weeks after deploying the original AppEngine redirector, as I wanted to make sure my rewriting code didn’t miss some corner cases that users were actually hitting.

I couldn’t figure out how to get a similar level of detail in Azure, but honestly I have not tried too hard yet, because I don’t need that level of control for the moment. Also, while there does seem to be an entry in the portal’s menu to query logs, when I try it out I get the message «Register resource provider ‘Microsoft.Insights’ for this subscription to enable this query», which suggests to me it might be a paid extra.

Speaking of paid, the question of costs is something that clearly needs to be kept in sight, particularly given recent news cycles. Azure seems to provide a 12-month free trial, but it also gives you £150 of credit for 14 days, which doesn’t seem to match up properly to me. I’ll update the blog post (or write a new one) with more details after I have some more experience with the system.

I know that someone will comment complaining that I shouldn’t even consider Cloud Computing as a valid option. But honestly, from what I can see, I will likely be running a couple more Cloud applications out there, rather than keep hosting my own websites and running my own servers. It’s just more practical, and it’s a different trade-off between costs and time spent maintaining things, so I’m okay with it going this way. But I also want to make sure I don’t end up locking myself into a single provider, with no chance of migrating.

On Android Launchers

The usual disclaimer applies: what I’m writing here are my own opinions, not those of my employer, and so on.

I have a relationship that is probably best described as love/hate/hate with Android launchers, ever since the first Android phone I used — the Motorola Milestone, the European version of the Droid. I have been migrating to new launcher apps every year or two, sometimes because I got a new launcher with the firmware (I installed an unofficial CyanogenMod port on the Milestone at some point), or with a new phone (the HTC Desire HD at some point, which also got flashed with CyanogenMod), or simply because I got annoyed with one and tried a different one.

I remember for a while I was actually very happy with HTC’s “skin”, whose launcher came with beautiful alpha-blended widgets (a novelty at the time), but I replaced it with, I think, ADW Launcher (the version from the Android Market – what is now the Play Store – not what was on CyanogenMod at that point). I think this was back when system apps could not be upgraded via the Store/Market distribution. To make the transition smoother I even ended up looking for widget apps, including a couple of “pro” versions, but at the end of the day I grew tired of those as well.

At some point, I think upon suggestion from a colleague, I jumped onto the Aviate launcher, which was unfortunately later bought by Yahoo!. As you can imagine, Yahoo!’s touch was not going to improve the launcher at all, to the point that one day I got annoyed enough that I started looking into something else.

Of all the launchers, Aviate is probably the one that looked the most advanced, and I think it’s still one of the most interesting ideas: it had “contextual” pages, with configurable shortcuts and widgets, that could be triggered by time of day or by location. This included the ability, for instance, to identify when you were in a restaurant and show FourSquare and TripAdvisor as the shortcuts.

I would love to have that feature again. Probably even more so now, as the apps I use are even more modal: some of them I only use at home (such as, well, Google Home, the Kodi remote, or Netflix), some of them nearly only on the go (Caffe Nero, Costa, Google Pay, …). Or maybe what I want is Google Now, which does not exist anymore, but let’s ignore that for now.

The other feature that I really liked about Aviate is the one I’ll call jump-to-letter: the Aviate “app drawer” kept apps organised by letter, in separate sections, which meant you could just tap on the right border of your phone and jump to the right letter. Having the ability to just go to N to open Netflix is pretty handy, particularly when icons all look mostly the same except for maybe colour.

So when I migrated away from Aviate, I looked for another launcher with a similar jump-to-letter feature, and I ended up finding Action Launcher 3. This is probably the launcher I used the longest; I bought the yearly supporter IAP multiple times because I thought it deserved it.

I liked the idea of backporting features from what was originally the Google Now Launcher – nowadays known as the Pixel Launcher – which allowed using the new features Google announced for their own phones on other phones already on the market. At some point, though, it started pushing the idea of sideloading an APK so that the launcher could also backport the actual Google Now page — that made me very wary and I never installed it; it would have needed too many permissions. But it became too pushy when it started updating every week, replacing my default home page with its own widgets. That was too much.

At that point I looked around and found Microsoft Launcher, which was (and is) actually pretty good. While it includes integration with Microsoft services such as Cortana, they kept all the integration optional, so I set it up with all those features disabled and kept just the stylish launcher, with jump-to-letter, and Bing’s lovely daily wallpapers, which are terrific, particularly when they are topical.

It was fairly lightweight, while having useful features, including the ability to hide apps from the drawer: those that can’t be uninstalled from the phone, those that have an app icon for no reason, such as SwiftKey and Gboard, or the many “Pro” license key apps that do nothing but launch the primary app.

Unfortunately last month something started going wrong, either because of a beta release or something else, and the Launcher started annoying me. Sometimes I would tap the Home button, and the Launcher would show up with no icons and no dock; the only thing I could do was go to the Apps settings and force stop it. It also started failing to draw the AIX Weather Widget, which is the only widget I usually have on my personal phone (the work phone has the Calendar on it). I gave up, despite one of the Microsoft folks contacting me on Twitter asking for further details so that they could track down the issue.

I decided to reconsider the previous launchers I had used, but I skipped over both Action Launcher (too soon to reconsider, I guess) and Aviate (given the current news around Flickr and Tumblr, I’m not sure I trust them — and I didn’t even check whether it is still maintained). Instead I went for Nova Launcher, which I had used before. It seems fairly straightforward, although it lacks the jump-to-letter feature. It worked well enough when I installed it, and it’s very responsive. So I went for that for now. I might reconsider more of them later.

One thing I noticed that all three of Action Launcher, Microsoft Launcher, and Nova Launcher do is allow you to back up your launcher configuration. But none of them do it through the normal Android backup system, the way WhatsApp or Viber do. Instead they let you export a configuration file you can reload. I guess it might be so you can copy your home screen from one phone to the other, but… I don’t know, I find it strange.

In any case, if you have suggestions for the best Android launcher, I’m happy to hear them. I’m not set in my ways with Nova Launcher, and I’m happy to pay a reasonable amount (up to £10 I would say) for a “Pro” launcher, because I know it’s not cheap to build them. And if any of you know of a “modal” launcher that would allow me to change the primary home screen depending on whether I’m home or not (I don’t particularly need the detail that Aviate used to provide), I would be very happy indeed.

Two words about my personal policy on GitHub

I was not planning on posting on the blog until next week, trying to stick to a weekly schedule, but today’s announcement of Microsoft acquiring GitHub is forcing my hand a bit.

So, Microsoft is acquiring GitHub, and a number of Open Source developers are losing their minds, in all possible ways. A significant proportion of the comments on this that I have seen on my social media sound like doomsaying, as if this spells the end of GitHub, because Microsoft is going to ruin it all for them.

Myself, I think that if it spells the end of anything, it is the end of the one-stop shop for working on any project out there, not because of anything Microsoft did or is going to do, but because a number of developers are now leaving the platform in protest (protest of what? One company buying another?)

Most likely, it’ll be the fundamentalists that will move their projects away from GitHub. And depending on what they decide to do with their projects, it might not even show up on anybody’s radar. A lot of people are pushing for GitLab, which is both an open-core self-hosted platform, and a PaaS offering.

That is not bad. Self-hosted GitLab instances already exist for VideoLAN and GNOME. Big, strong communities are, in my opinion, in the perfect position to dedicate people to supporting the core infrastructure that makes open source software development easier, in particular because it’s easier for a community of dozens, if not hundreds, of people to find dedicated people to work on it. For one-person projects, that’s overhead; it’s distracting, and destructive as well, as fragmenting into micro-instances will cause pain when forking projects — and at the same time, allowing any user who just registered to fork the code on any instance is prone to abuse and a recipe for disaster…

But this is all going to be a topic for another time. Let me try to go back to my personal opinions on the matter (to be perfectly clear that these are not the opinions of my employer and yadda yadda).

As of today, what we know is that Microsoft acquired GitHub, and they are putting Nat Friedman of Xamarin fame (the company that stood behind the Mono project after Novell) in charge of it. This choice makes me particularly optimistic about the future, because Nat’s a good guy and I have the utmost respect for him.

This means I have no intention to move any of my public repositories away from GitHub, except if doing so would bring a substantial advantage. For instance, if there was a strong community built around medical devices software, I would consider moving glucometerutils. But this is not the case right now.

And because I still root most of my projects at my own domain, even if I did move them, the canonical URLs would still be valid. This is a scheme I devised after getting tired of fixing up all the places where unieject ended up.

Microsoft has not done anything wrong with GitHub yet. I will give them the benefit of the doubt, and not rush out of the door. It would and will be different if they were to change their policies.

Rob’s point is valid, and it would be a disgrace if various governments were to push Microsoft into a corner, requiring it to purge content that the smaller, independent GitHub would have left alone. But unless that happens, we’re debating hypotheticals at the same level as “If I was elected supreme leader of Italy”.

So, as of today, 2018-06-04, I have no intention of moving any of my repositories to other services. I’ll also reply with a link to this blog post, and no accompanying comment, to anyone who suggests I should do so without any benefit for my projects.

How you can tell you’re dealing with a bunch of fanboys

In my previous post, where I criticised Linus’s choice of bumping the kernel’s version to 3 without thinking through the kind of problems we, as distributors, would face with broken build systems that rely on the output of the uname command, I expected mixed reactions, but I mostly thought it would bring in technical arguments.

Turns out that the first comment was actually in support of the breakage for the sake of finding bugs, while another (the last at the time of writing) shows the presence of what, undeniably, is a fanboy. A Linux (or Linus) one at that, but still a fanboy. And yes, there are other kinds of fanboys beside Apple’s. Of the two comments, the former is the one I actually respect.

So how do you spot fanboys of all trades? Well, first look for people who stick with one product, or one manufacturer. Be it Apple, Lenovo, Dell, or, in the case of software, Canonical, the Free Software Foundation, KDE or Linus himself, sticking with a single supplier without even opening up to the idea that others have done something good is an obvious sign of being a fanboy.

Now, it is true that I don’t like having things from many different vendors, as they tend to work better together when they come from the same one, but that’s not to say I can’t tell when another vendor does something good. For instance, after two Apple laptops and an iMac, I didn’t have to stay with Apple… I decided to get a Dell, and that’s what I’m using right now. Similarly, even though I liked Nokia’s phones, my last two phones were a Motorola and, nowadays, an HTC.

Then make sure to notice whether they can accept flaws in the product or in its maker’s decisions. This is indeed one of the most obnoxious behaviours of Apple’s fanboys, who tend to justify all the choices of the company as something done right. Well, here is the catch: not all of them are! Part of this overlaps with the next trait, but it is important to understand that for a fanboy even what would be a commercial failure, able to bring a company near bankruptcy, is a perfect move that was just misunderstood by the market.

Again, this is not limited to Apple fanboys; it shouldn’t be so difficult to identify a long list of Nokia fanboys who keep supporting their multi-headed workforce investment strategy of maintaining a number of parallel operating systems and classes of devices, in spite of a negative market response… and I’m talking about those who have nothing to gain directly from said strategy — I’m not expecting the people being laid off, or those whose tasks are to be reassigned away from their favourite job, to be supportive of said strategy, of course.

But while they are so defensive of their love affair, fanboys also can’t see anything good in what the competition does. And this is unfortunately way too common in the land of Free Software supporters: for them Sony is always evil, Microsoft never does anything good, Apple is only out to make crappy designs, and so on.

This is probably the most problematic situation: if you can’t accept that the other manufacturers (or the other products) have some good sides to them, you will not consider the same improvements yourself. This is why dismissing as a fanboy anybody who claims Apple did something good is counterproductive: let’s look at what they do right, even if it’s not what we want (they are, after all, making decisions based on their general strategy, which is certainly different from the Free Software general strategy).

And finally, you’re either with them or against them. Which is what the comment that spawned this discussion shows. You either accept their exact philosophy or you’re an enemy, just an enemy. In this case, I just had to suggest that Linus’s decision was made without thinking of our (distributors’) side, and I became an enemy who should go use some other project.

With all this on the table, can you avoid becoming a fanboy yourself? I’m always striving to make sure I avoid that, though I’m afraid many people don’t seem to do the same.

Know thy competitor

I don’t like the use of the word “enemy” when it comes to software development, as it adds some sort of religious feeling to something that should only be a matter of business and pragmatism. Just so you know.

You almost certainly know that I’m a Free Software developer. And if you have followed me for long enough, you also most likely know that I’ve had my stint working with Ruby and Rails, even though I haven’t worked in that area for a very long time and, honestly, I’d still prefer to stay away from it.

I have criticised a number of aspects of Rails development before, mostly due to my work on the new Ruby packaging framework for Gentoo, which exposed the long list of bad practices still applied when developing Ruby extensions designed to be used by Rails applications. I think the climax of my disappointment with Rails-related development was reached when I was looking at Hobo, which was supposed to be some sort of RAD environment for Rails applications, and turned out to complicate non-standard procedures way more than Rails itself.

It could then be seen as ironic that, after all this, my current line of work includes developing for the Microsoft ASP.NET platform. Duh! As for why I’m doing this: the money is good, the customer is a good one, and lately I’ve been quite in need of stable customers.

A note here: I’m actually considering moving away from development as my main line of work and getting into the “roaming sysadmin” field. With my recent customers, development tends to take too much time, especially as even the customers themselves are not sure how they want things done, and are unable to accept limitations and compromises in most situations. System administration at least only requires me to do the job as quickly and as neatly as possible.

This is not the first time I have had to work with Microsoft technologies; I spent my time on .NET and Mono before, and earlier this year I had to learn WPF. I have always admitted when Microsoft’s choices are actually better than those of some Free Software projects. Indeed, I like the way they designed the C# language itself, and WPF is quite cool in the way it works, even though I find it a bit too verbose for my tastes.

But with ASP.NET I suddenly remembered why I prefer Free Software. Rails and Hobo come nowhere near the badness of ASP.NET! Not only is the syntax of the .aspx files, a mix of standard HTML and custom tags, so verbose that it’s not even funny (why oh why every tag needs to contain runat="server", when no alternative is ever presented, is something I’ll never understand), but even the implementation of the details in the backend is stupid.

Take for instance the Accordion “control”, which is supposed to allow adding collapsible panels to a web page without having to play with JavaScript manually, so that the page does not even have to carry the content of the panes when they are not to be displayed (kinda cool when you have lots of data to display). These controls have a sub-control, the AccordionPane, which in turn has a Header and a Content. I was expecting the Accordion’s AccordionPane’s Header to carry a default CSS class to identify it, so that you could quickly apply styles to it… the answer is nope. If you want a CSS class on the header, you have to set a property on each AccordionPane control (which means once per sub-pane), so that it is exported later on. Lovely.

And let’s not forget that if you wish to actually develop an externally-accessible application, to test it on devices other than your own computer, your only choice is using IIS itself (the quick’n’dirty webserver that Visual Studio lets you use cannot be configured to listen to anything other than localhost)… and to make it possible to publish the content to the local IIS, you have to run Visual Studio with administrator privileges (way to go, UAC!).

Compared to this, I can see why Rails has had so much success…

Wasting a CrowningMomentOfIToldYouSo

Last week, Skype had a bit of trouble; well, quite a bit of trouble. That’s the kind of trouble that makes you very angry with your service provider, until you think twice and remember you’re probably not paying for it — at least, that’s what should happen for most people. Yes, I know there are people who pay for Skype, but I’m pretty sure that most of those complaining don’t, for a simple reason: if you’re paying for a service and that service does not work, you do not bitch on the net, you go to customer care and demand your money back.

For whatever reason – which mostly relates to the human instinct for seeing conspiracies everywhere – people blamed Microsoft for it, even though it is virtually impossible for them to be the cause; heck, the acquisition is not even complete yet!

It would have been a good time to show users how relying on a proprietary, walled-garden technology without any reliability assurance, such as Skype, is not the smartest business move. But no, a number of people, including some self-appointed Free Software advocates, preferred once again to paint Microsoft as the Big Evil, the One True Ruler and so on. Never mind that Skype has always been a proprietary, closed, patented technology; it was good just because they made a Linux client! Alas.

Now, there may well be another chance to get that crowning moment (geek points to those who guess where my title comes from): if Microsoft really were to drop support for Skype on platforms they don’t control. Right now you can use Skype on Windows, Linux, OS X, Android, iPhone, PSP (3000 and Go models only), Symbian, some TVs, a number of hardphones and so on. Rumours have it that Microsoft is ready to cut off all these accesses to be the only ones controlling the technology. I’d expect otherwise.

While it is difficult to argue that Microsoft cares much about Linux (they definitely care more about OS X than they do about Linux), it seems suicidal for Microsoft to take away the one feature that keeps most Skype users attached to it: omnipresence. Wherever you are, you have Skype, which is why even I keep using it (even though I have a number of backup options). Microsoft seems to know what it means to be interoperable with Linux, from time to time, as should be noted from them helping Novell work on Moonlight to have compatibility with Silverlight.

But facts shouldn’t get in the way of strong opinions when it comes to Microsoft, as people who should know better prefer to paint them as a single-minded, evil corporation, with the aggravating quality of being incompetent and suicidal. I’ll be clear here and say out loud that trying to paint Bill Gates as Elliot Carver is borderline insane.

First of all, trying to paint any corporation as single-minded shows that they have never had to deal with one. In any relatively big company or project, not having multiple heads and directions would be impossible. This is why Microsoft can produce utter crap as well as decent stuff, fail badly or show off cool technology such as the Kinect. But then again, you can’t even argue that they did a decent job at providing clear APIs for their Xbox without getting painted as being on their payroll, as they couldn’t possibly get anything right. Talk about echo chambers, uh?

On the other hand, I don’t have any reason to expect Microsoft to make the obvious marketing move; there are a number of possible moves, and one might very well be to drop support for non-Microsoft platforms from the new version of their software, or at least of their protocol, as unlikely as I think that to be. Would that be bad for Linux or for Free Software? Only if we argue that losing the proprietary Skype client is bad — which we could only do if we also accepted that software might be proprietary; I do accept that, but the same advocates above don’t always sound that way.

What we could do instead is get ready for the day Skype might collapse due to Microsoft’s actions, and show that it is possible to have an alternative. But having an alternative does not mean merely trying to reverse engineer the protocol; it means getting our act together and finding a decent way to have videochat on Linux without going crazy — I haven’t tried Pidgin in a while, but last time it let me configure neither the audio nor the video input, both of which it would get wrong.

While I know there are enough developers who are working on this, I also expect advocates, and their sites, to waste the chance of making good publicity for Free Software, preferring instead to play the blame game, as pictured above. Gotta love reality, uh?

Why do FLOSS advocates like Adobe so much?

I’m not sure how this happens, but more and more often I see FLOSS advocates supporting Adobe, and in particular Flash, in almost any context out there, mostly because they now appear a lot like an underdog, with Microsoft and Apple picking on them. Rather than liking the idea of Flash, a proprietary software product, being cornered out of the market, they seem to cheer any time Adobe gets a little more advantage over the competition, and cry foul when someone else tries to ditch them:

  • Microsoft released Silverlight, which is evil – probably because it’s produced by Microsoft, or alternatively because it uses .NET, which is produced by Microsoft – and we have a Free as in Speech implementation of it in Novell’s Moonlight; but FLOSS advocates dump on that too: it’s still evil, because there are patents on .NET and C#; please note that the only FLOSS implementation of Flash I know of is Gnash, which is not exactly up to speed with the kind of Flash applets you find in the wild;
  • Apple’s iPhone and iPad (or rather, all the Apple devices based on iPhone OS, now iOS) don’t support Flash, and Apple pushes content publishers to move to “modern alternatives” starting from the <video> tag; rather than, for once, agreeing with Apple and supporting that idea, FLOSS advocates decided to start name-calling them because they lack support for a ubiquitous technology such as Flash — the fact that Apple’s <video> tag suggestions were tied to the use of H.264 shouldn’t have made any difference at all, since Flash does not support Theora either, so with the exclusion of the recently released WebM in the latest 10.1 version of the Flash Player, there wouldn’t be any support for “Free formats” in either case;
  • Adobe stirs up a lot of news declaring support for Android; Google announces Android 2.2 Froyo, supporting Flash; rather than declaring Google an enemy of Free Software for helping Adobe spread their invasive and proprietary technology, FLOSS advocates start issuing “take that” comments toward iPhone users, since “their phone can see Flash content”;
  • Mozilla refuses to provide any way at all to view H.264 files directly in their browser, leaving users unable to watch YouTube without Flash unless they perform a ton of hacky tricks to convert the content into Ogg/Theora files; FLOSS advocates keep on supporting them because they haven’t compromised.

What is up here? Why should people consider Adobe a good friend of Free Software at all? Maybe because they control formats that are usually considered “free enough”: PostScript, TIFF (yes, they do), PDF… or because some of the basic free fonts that TeX implementations and the original X11 used come from them. But none of this really sounds relevant to me: they don’t provide a Free Software PDF implementation; rather, they have their own PDF reader, while the Free implementations often have to chase the format, with mixed results, to keep opening new PDF files. As much as Mike explains the complexity of it all, the Linux Flash player is far from being a nice piece of software, and their recent abandonment of the x86-64 version of the player makes it all the more sour.

I’m afraid that the only explanation I can give to this phenomenon is that most “FLOSS advocates” line themselves up with, and only with, the Free Software Foundation. And the FSF seems to have a very personal war against Microsoft and Apple, probably because the two of them actually show that in many areas Free Software is still lagging behind (and if you don’t agree with this statement, please have a reality check and come back again — and this is not to say that Free Software is not good in many areas, or that it cannot improve to become the best), which goes against their “faith”. Adobe, on the other hand, doesn’t show Free Software up the same way, while not really helping it out either (sorry, but Flash Player and Adobe Reader are not enough to say that they “support” Linux; and don’t try to sell me the idea that they are not porting Creative Suite to Linux just so people would use better Free alternatives).

Why do I feel like taking a shot at the FSF here? Well, I have already repeated multiple times that I love the PDFreaders.org site from the FSFe; as far as I can see, the FSF only seems to link to it in one lost and forgotten page, just below a note about CoreBoot… which doesn’t make it at all prominent. Also, I couldn’t find any open letter of theirs blaming PDF for being a patent-risky format, a warning that is instead present on the PDFreaders site:

While Adobe Systems grants a royalty-free use of any patents to the PDF format, in any application that adheres to the PDF specifications, other companies do hold patents that may limit the openness of the standard if enforced.

As you can see, the first part of the sentence admits that there are patents over the PDF format, but royalty-free use is granted… by Adobe at least; nothing is said about the other parties that might hold them.

At any rate, I feel like there is a huge double-standard issue here: anything that comes out of Microsoft or Apple, even with Free Software licenses or patent pledges, is evil; but proprietary software and technologies from Adobe are fine. It’s silly, don’t you think?

And for those who still would like to complain about websites requiring Silverlight to watch content, I’d like to propose a different solution to ask for: don’t ask them to provide the content with Flash, but rather over a standard protocol, one with a number of Free Software implementations and support on the mainstream operating systems for both desktops and mobile phones: RTSP is such a protocol.

Sometimes it’s really just about what’s shinier

Recently, I bought an Xbox 360 (Elite) unit, to replace my now-dead PlayStation 3 (yes, I’ll replace that as well, but for now this option was cheaper, and I can borrow a few games from a friend of mine this way). Please don’t start with the whole “Micro$oft” crap, and learn to attack your adversary on proper (technical) grounds rather than with slurs and the like.

Besides, I can’t see any reason why any of the three current-generation consoles is better than the others as far as Free Software ideals are concerned: sure, Sony does use some open source software in its products (PS3, PSP and Sony Bravia TVs), but as far as I can see they don’t give much back in terms of new software, nor do they seem to support Free Software that could somewhat work with their hardware (like a proper Free DLNA implementation, which would be very welcome by PS3 and Bravia users). Even the one thing the PS3 had that the others lacked – support for installing Linux, using PPC64 and the Cell Broadband Engine to develop for IBM’s new platform – was dropped from the new “Slim” model.

I also have to say that even when I’m taking time off I end up thinking about the technical details, to the point that my friends dislike me a bit when I start decomposing the way things are implemented in games; probably just as much as I disliked my friend the amateur director when he decomposed the films we saw together — on the other hand, after helping him out with his own production, I’m much more resilient to that, and I have actually started to take a liking to watching the special content on DVDs and BluRays where they do the same. So with this in mind, I made some considerations about the Xbox 360 and the PlayStation 3, and how they fare in comparison, from my point of view.

For some reason, I had always seen the Xbox as having a worse graphics engine than the PlayStation 3; this was somewhat supported by my friend who owns one, because he had it hooked up to an old, standard-definition CRT rather than to a modern High Definition LCD, like I had the PlayStation 3 set up. With this in mind, I definitely thought of the Xbox as a “lower” console; on the other hand I soon noticed, after connecting it to my system, that it fares pretty well in comparison during gameplay (I’m saying this looking at Star Ocean: The Last Hope — gotta love second-hand game stores!), so what might have brought about this (at least around here) common mistake about the Xbox’s graphics being worse?

  • the original Xbox 360 models, especially the Arcade entry-level one, lacked HDMI support; while even the PlayStation 3 ships with just the worst cable possible (composite video), it has at least out-of-the-box support for standard HDMI cables, which are both cheap and easy to find;
  • the only two cables supporting High Definition resolutions for the original models are VGA and component video cables; the former is unlikely to be supported by lower-end HD LCDs – like the one my friend bought a few months ago – and also depends on having a proper optical audio input port to feed the sound; the latter is difficult to find, as only one store out of the ten that sell games and consoles in my area had one available;
  • since a lot of people bought the entry-level version to spend as little as possible, it’s very likely that a lot of them didn’t want to spend an extra 30 euro to get the cable either, which means lots of them still play in standard definition;
  • even those who spent money on the cable might not get the best graphics available; I got the cable for my friend as an Xmas gift (note: I use the name Xmas to point out that it is mostly a convention for me; being an atheist – and my friend as well – I don’t care much), and he was enthusiastic about the improvement; it was just a couple of weeks later that I found out he hadn’t configured the console to output in Full HD resolution through the component cable;
  • the Dashboard menu is not in HD quality; it might sound petty to note that, but it does strike one as odd to have these heavily aliased fonts and blurry icons on top of an HD-quality game render – such as the above-noted Star Ocean, or Fable 2 – especially when it happens for an achievement (a trophy, in PlayStation terms) being reached;
  • cutscenes are the killers! While the renders are pretty much on par with, if not better than, the PlayStation 3, the pre-rendered full-motion videos are a different story: Sony can make use of the huge storage provided by the 50GB BluRay discs, while Microsoft has to live with 4GB DVDs; this does not only mean that you end up with 3-disc games, like Star Ocean, that need to be fully installed on the hard drive (which is, by the way, optional for the entry-level system), but also that they cannot just put minutes upon minutes of HD FMVs on disc, and end up compressing them; the opening sequence of Star Ocean shows this pretty well: the DVD-quality video is clearly noticeable, especially when compared with the rest of the awesome game renderings; luckily, the in-game cutscenes are rendered in real time instead.

So why am I caring about noting these petty facts? Well, there is a lesson to be learned in that as well: Microsoft’s choices about the system impacted its general reputation: not providing HDMI support, requiring many additional accessories on top of the basic system (high definition cable, hard drive), and not supporting standard upgrades (you need Xbox-specific storage to back up and copy saves around, and you cannot increase the system’s storage, while Sony allows you to use USB mass storage devices for copy – and backup – operations, as well as having user-serviceable hard drives). A system that might have been, in many areas, better is actually considered lower-end by many, many people.

No matter how many technical reasons you have on your side, you might still fail if you don’t consider what people will say about your system! And that includes the people who can’t be bothered to read manuals, instructions, and documentation. This is one thing that Linux developers, and advocates, need to learn from others, before they are crushed by learning it the hard way.

And as a final note: I got the Xbox for many reasons, among which, as I stated above, was the chance to borrow some games from a friend rather than outright buying them; on the whole experience, though, I think I still like the PS3 better. It’s more expensive, and sometimes it glitches badly in graphics and physics (Fallout 3, anybody?), but there are many reasons why it’s better. The Xbox is much more noisy – even when installing games to the hard drive – to begin with; the PlayStation 3, meanwhile, plays BluRays, does not need line-of-sight for the remote control, and does not require special cables to charge the wireless controllers. I think the system is generally better, although the Xbox got more flak than it should have, at least from the people I know around here, for the above-noted problems.

(Mis)feature by (mis)feature porting

There is one thing that doesn’t upset me half as much as it should, likely because I’m almost never involved in end-user software development nowadays (although it can be found in back-end software as well): feature-by-feature “ports” (or rather, re-implementations).

Say there is a hugely known, widely used piece of proprietary software, and lots of people feel that a free alternative to it is needed (which happens pretty often, to be honest, and is the driving force of the Free Software movement, in my opinion); you have two main roads, among a gazillion possible choices, that you can take: you can focus on the use cases for the software, or you can re-implement it feature by feature. I have learnt, through experience, that the former is always better than the latter.

When I talk about experience, I don’t mean the user experience but rather the actual experience of coding such ports. A long time ago, one of my first projects with Qt (3) under Linux was an attempt at porting the ClrMame Pro tool (for Windows) — interestingly enough, I cannot find the homepage of the tool on Google; I rather get the usual spam-trap links from the search. My reason for trying to re-implement that software, at the time, was that I used to be a huge MAME player (with just a couple of ROMs) and that the program didn’t work well under Wine (and the few tries I took at fixing Wine didn’t work out as well as I’d have hoped — yet I think a few of my patches made it into Wine, although I doubt the code persists today).

Feature-by-feature porting is usually far from easy, especially for closed-source applications, because you try to deduce the internal workings from the external interface (be it user interface or programming interface), and that rarely works out as well as you would like. Given this is often called reinventing the wheel, you should think of it as trying to reinvent the wheel after being given just a cart without wheels, with only the way they should connect to go by. For open source software, this is obviously easier to do.

Now, while there is so much software out there that makes the same mistake, I’d like to look first at one project that, luckily, ended up breaking away from the feature-by-feature idea and started working on a different method, albeit slowly and while still being tied too much, in my opinion, to the original concept: Evolution. Those who used the first few versions of Evolution might remember that it clearly, and unbearably, tried to imitate Microsoft Outlook 2000 feature by feature. The same icon pane on the left side, the same format for the contacts’ summary, and the same modules. The result was… not too appealing, I’d say. As I said, the original concept creeps in today as well, as you still have essentially the same modules: mail, contacts, calendar, tasks and notes, the last two being the ones I find quite pointless today (especially considering the presence of Tomboy and GNote). A similar design can be found in KDE’s Kontact “shell” around the separated components of the PIM package.

On the other hand, I’d like to pick a different, proprietary effort: Apple’s own PIM suite. While they tend to integrate their stuff quite tightly, they have also taken a quite different approach for their own programs: Apple’s Mail, iCal and Address Book. They are three different applications; they share the information they store with one another (so that you can send and receive meeting invites through Mail, picking up the contacts’ emails), but they have widely different, sometimes inconsistent interfaces when you put one near the other. On the other hand, each interface seems to make sense on its own, and in my opinion ends up faring pretty well on the usability scale. What it does not try to do is what Microsoft did, that is, forcing the same base graphical interface over a bunch of widely different use cases.

It shouldn’t be a surprise, then, that the other case of feature-by-feature (or in this case, misfeature-by-misfeature) porting is again attached to Microsoft, this time from the “origin” end: OpenOffice. Of course, it is true that its original implementation comes from a different product (StarOffice) that didn’t really have the kind of “get the same” approach that Evolution and other projects have taken, I guess. On the other hand, they seem to keep going that way, at least as far as I can tell.

The misfeature that brought me to write this post today is a very common one: automatic hyperlinking of URLs and email addresses… especially email addresses. If I consider the main target output of OpenOffice, I’d expect printed material (communications, invoices, and so on) to be at the top. And in that kind of product you definitely don’t need, nor want, those things hyperlinked; they would not be useful and would be mostly unusable. Even if you do produce PDFs out of it (a format which supports hyperlinks), I don’t think that just hyperlinking everything with an at-character in it would be a sane choice. As I have been made aware, one of the most likely reasons for OpenOffice to do that is that… Word does. But why does Word do it in the first place?

It’s probably one of two things. At the time of Office 2000 (or was it 97? I said 97 before on identi.ca, but after thinking about it, it might have been 2000 instead), Microsoft tried to push Word as a “web editor”: the first amateur websites started to crop up, and FrontPage was still considered much more high-end than Word; having auto-hyperlinking there was obviously needed. The other option dates to about the same time, when Microsoft tried to push Word as… Outlook’s mail editor (do you remember the time when you received mail from corporate contacts that was only an attached .doc file?).

So in general, the fact that some other software has a feature does not really justify implementing the same feature in a new one. Find out why the feature would be useful, and then consider it again.