Know thy competitor

I don’t like the use of the word “enemy” when it comes to software development, as it adds some sort of religious feeling to something that should only be a matter of business and pragmatism. Just so you know.

You almost certainly know that I’m a Free Software developer. And if you’ve followed me for long enough, you also most likely know that I’ve had my stint working with Ruby and Rails, even though I haven’t worked in that area for a very long time and, honestly, I’d still prefer to stay away from it.

I have criticised a number of aspects of Rails development before, mostly due to my work on the new Ruby packaging framework for Gentoo, which has exposed the long list of bad practices still applied in developing Ruby extensions designed to be used by Rails applications. I think the climax of my disappointment with Rails-related development came when I looked at Hobo, which was supposed to be some sort of RAD environment for Rails applications, and turned out to complicate the use of non-standard procedures even more than Rails itself.

It could then be seen as ironic that, after all this, my current line of work includes developing for the Microsoft ASP.NET platform. Duh! As for why I’m doing this: the money is good, the customer is a good one, and lately I’ve been quite in need of stable customers.

A note here: I’m actually considering moving away from development as my main line of work and getting into the “roaming sysadmin” field. With the customers I’ve had recently, development tends to take too much time, especially as even the customers themselves are not sure how they want things done, and in most situations are unable to accept limitations and compromises. System administration at least only requires me to do the job as quickly and as neatly as possible.

This is not the first time I’ve had to work with Microsoft technologies; I spent my time on .NET and Mono before, and earlier this year I had to learn WPF, and I’ve always admitted when Microsoft’s choices are actually better than some Free Software projects’ ones. Indeed, I like the way they designed the C# language itself, and WPF is quite cool in the way it works, even though I find it a bit too verbose for my tastes.

But with ASP.NET I suddenly remembered why I prefer Free Software. Rails and Hobo come nowhere near the badness of ASP.NET! Not only is the syntax of the aspx files, a mix of standard HTML and custom tags, so verbose that it’s not even funny (why every tag needs to contain runat="server", when no alternative is ever presented, is something I’ll never understand), but even the implementation details in the backend are stupid.

Take for instance the Accordion “control”, which is supposed to let you add collapsible panels to a web page without having to play with JavaScript manually, so that the page does not even have to carry the content of the panes when they are not to be displayed (kinda cool when you have lots of data to display). These controls have a sub-control, the AccordionPane, which in turn has a Header and a Content. I was expecting the “Accordion’s AccordionPane’s Header” to have a CSS class identifying it by default, so that you could apply styles to it quickly… the answer is nope. If you want a CSS class on the header, you have to set a property on each AccordionPane control (which means once per sub-pane), so that it gets emitted later on. Lovely.

And let’s not forget that if you wish to develop an externally-accessible application, to test it on devices other than your own computer, your only choice is using IIS itself (the quick’n’dirty webserver that Visual Studio lets you use cannot be configured to listen to anything other than localhost)… and to make it possible to publish the content to the local IIS you have to run Visual Studio with administrator privileges (way to go, UAC!).

Compared to this, I can see why Rails has had so much success…

A personal experience on why the FLOSS movement cripples itself

I have written recently about my stand on anti-corporate feelings, and a longer time ago I wrote against ‘pirate’ software, but today I feel like ranting a bit about the way the FLOSS people who still call proprietary software “illegitimate” are hurting the whole cause.

A situation like the one I’m going to describe has happened to me with more than a couple of prospective clients. With one variation or another, the basic situation is more or less the same.

I get called up by the prospective customer, who is looking for some kind of software solution, or a mixed software-hardware solution. They present me their needs, and after a bit of thinking about it, I find there are two options: use Free Software, which usually requires fiddling with set-up, tweaking, and maintenance, or use a proprietary solution, with a high license cost but a smaller requirement for set-up and maintenance.

I usually present the pricing together with a pros/cons fact sheet, pointing out that any proprietary solution will rely solely and exclusively on the original vendor, and thus the first time that vendor does something that goes against your wishes or needs, you’re left paying for something you can’t make good use of. While this is usually not easily forgotten, they are scared by the price.

I do my best to offer options that are cheaper than the license of the proprietary software, so that there is a better chance for the Free alternative to be picked up. It’s not difficult, given most of the problems I’ve been shown are solved by proprietary software that is very expensive. Also, it is in my personal interest to have them choose the Free Software solution: I get the money, and I can usually release at least the fixes (or even better, the customisations) as Free Software, thanks to licenses such as the GPL.

But here, most of the hopes get shattered: “You call it Free but we have to pay quite a bit of money for it… we can get the other one cheaper, just use eMule”. At least here in Italy, honesty is a rare virtue, too rare a virtue for it to be “exploited” by Free Software. But why do I say that it’s a mistake of FLOSS developers if this is the case? Isn’t it just the doing of a business owner who cares nothing about legality and uses unauthorized copies? Well, yes, of course.

On the other hand, talking about the illegitimacy and immorality of proprietary software, and defending “piracy” (or unauthorized copies, if you wish to call them that), does not really help the cause; it actually hands them arguments such as “well, even the guys developing that stuff defend using cracked copies of software, so why should I pay you to create something anew when the program already exists?”.

As I said before, make sure the people around you understand why they should use Free Software, and that is not done by telling them how bad copy-protection and DRM are, or about the “sins” of Windows. It’s done by showing them that there is a price to pay for using that software, both in direct monetary terms and in flexibility. And maybe more money would then flow into the pockets of the Free Software developers who can make it not suck in the areas where it currently sucks.

A visible case against bundled libraries

I wrote a lot about bundled libraries and why they are bad, but I usually stick to speaking about Free Software (or Open Source Software; yeah, they are two different sets!). This time, let me explain how they are bad for proprietary, binary-only software as well.

It’s a long-winded story, but I finally decided to give Dropbox a try after hearing so much good about it. Thanks to Fabiano, who wrote it, I got a rough ebuild for the nautilus-dropbox extension, so I cleaned it up a bit and installed it.

The first step of the set-up process is… downloading the “proprietary daemon”, which turns out to include a number of otherwise Free Software components, including, but not limited to, a number of Python extensions (and of course a _whole copy of the Python interpreter_… they don’t even go as far as trying to hide it, as all the symbols are visible in the ELF file!), zlib 1.2.3 and a couple of D-Bus related libraries.

Okay, nasty, but let’s leave it at that for now; proprietary software can easily be crap, and I say that as somebody who also works on proprietary software for jobs from time to time, so I don’t expect them to have much intention of cleaning up their code to what Free Software developers would call quality. But on the other hand, I’d expect them to try to make it possible to run their code on the widest possible range of systems.

So I ignored, for now, the fact that it installs a proprietary package without allowing it to be packaged properly by the distribution, and I went on to configure it. Only three other steps are involved in the setup process: logging in with your email address and password, choosing your subscription type (I went for “Free”… while I am/was considering getting a higher tier, it’s try first and decide later!), and deciding where to put your Dropbox folder.

Call me old-fashioned, but I like my important GUI folders on the desktop, so I wanted to put it there alongside “Downloads” and “Documents”.

Whoops! The window disappears as soon as I click the button to choose the placement. Smells funny; let me open the console and see:

/home/flame/.dropbox-dist/dropbox: symbol lookup error: /usr/lib/libtracker-client-0.7.so.0: undefined symbol: dbus_g_proxy_set_default_timeout

A quick check tells me two things: the symbol is part of libdbus-glib-1.so.2, which libtracker-client-0.7.so.0 links to properly, and Dropbox has a local copy of that library, which overrides my system copy. Unfortunately, their copy is older, so it lacks that symbol. Hilarity ensues.
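A minimal C sketch of the kind of check involved: probe each copy of the library for the missing symbol with dlsym(). The system path and the symbol come from the error above; the bundled library’s exact filename inside the .dropbox-dist directory is an assumption on my part.

    /* probe.c — check whether a given copy of a library exports a symbol.
     * Build with: cc probe.c -ldl -o probe */
    #include <dlfcn.h>
    #include <stdio.h>

    static void probe(const char *libpath, const char *symbol)
    {
        void *handle = dlopen(libpath, RTLD_LAZY | RTLD_LOCAL);
        if (!handle) {
            printf("%s: cannot load: %s\n", libpath, dlerror());
            return;
        }
        /* dlsym() returns NULL when the library does not export the symbol */
        printf("%s: %s is %s\n", libpath, symbol,
               dlsym(handle, symbol) ? "present" : "MISSING");
        dlclose(handle);
    }

    int main(void)
    {
        const char *sym = "dbus_g_proxy_set_default_timeout";
        probe("/usr/lib/libdbus-glib-1.so.2", sym);                  /* system copy */
        probe("/home/flame/.dropbox-dist/libdbus-glib-1.so.2", sym); /* assumed bundled path */
        return 0;
    }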

Speaking of which, please do remember that most libraries don’t change SONAME when they make backward-compatible ABI changes, but the same is not true for forward compatibility! So you can run software linked against an older library with a newer one, but not vice-versa. It’s a nasty thing to forget.

I decide to simply hack around this and remove their copy of the library, and try again… this time it works, but all the icons are missing. The reason? I’m using an SVG-based icon set; the SVG renderer uses libxml2; libxml2 uses zlib… they ship zlib 1.2.3, but my libxml2 is looking for the versioned symbols from 1.2.4. Thanks to the fact that it’s just a plugin, this time it doesn’t kill the process (yes, this is one of the few good reasons to use plugins), and I just lose the ability to see icons.
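Incidentally, zlib itself makes this kind of mismatch easy to spot at runtime; this is the standard zlib version-check idiom, shown here as a minimal sketch rather than anything Dropbox actually does:

    /* zlibver.c — compare the zlib we were built against with the one loaded.
     * Build with: cc zlibver.c -lz -o zlibver */
    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    int main(void)
    {
        /* ZLIB_VERSION is baked in at compile time; zlibVersion() reports
         * whichever shared object the dynamic loader actually picked up,
         * e.g. a bundled copy overriding the system one. */
        printf("compiled against zlib %s, running with zlib %s\n",
               ZLIB_VERSION, zlibVersion());
        if (strcmp(ZLIB_VERSION, zlibVersion()) != 0)
            fprintf(stderr, "warning: mismatched zlib loaded!\n");
        return 0;
    }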

I’ve opened a ticket for this with Dropbox, but I don’t really want to get my hopes up. What I’d love to see them do (still keeping a pragmatic mind: I won’t expect them to open-source their whole implementation, since they have kept it explicitly proprietary up to now):

  • make it possible to package the daemon from the distribution level, so that it’s not downloaded and installed per-user;
  • allow for a quick way to remove the bundled libraries, bringing in the needed dependencies from the system;
  • possibly make available a version of the daemon that does not link in the whole Python interpreter so that distributions can use their own system interpreter.

So if somebody from Dropbox is listening: please work with us distributors. You only have to gain from that. A better experience with your service on our users’ desktops is likely going to bring you more money, as more people are likely to buy a subscription. For what it’s worth, I would buy one myself, if it worked like a charm rather than in the current hackish way on my systems.

Multi-architecture, theory versus practice

You probably remember the whole thing about FatELF and my assertion that FatELF does nothing to solve what the users supporting it want to see solved: multiple-architecture support by vendors. Since I don’t want to be taken for one of those people who throw out an assertion and expect everybody to fall in line with it, I’d like to explain somewhat further what the problem is, in my opinion.

As I said before, even if FatELF could simplify deployment (at the expense of exponentially increasing the complexity of every other part of the operating system that deals with executables and libraries), it does nothing to solve a much more important problem, one that has to be solved before you can even think of achieving multi-architecture support from vendors: development.

Now, in theory it’s pretty easy to write multi-architecture code: make no use of any machine-dependent feature, no inline assembly, no function calls outside the scope of a standard. But is it possible for sophisticated code to stay that way? It certainly often is not for open source software, even when it already supports multiple architectures and multiple software platforms as well. You can find that even OpenOffice requires non-trivial porting to support Linux/HPPA, and that’s a piece of software that, while deriving from a proprietary suite (and having been handled by Sun, which is quite well known for messy build systems), has been heavily hacked on by a large community of developers, and already includes stable support for 64-bit architectures.

Now try to be a bit imaginative, and picture yourself working on a piece of proprietary code: you’ve already allocated money to support Linux, which is, from many points of view, a fringe operating system. Sure, it keeps growing in popularity, but then again a lot of those using it won’t run proprietary applications anyway… or wouldn’t pay for them. (And let’s not even start with the argument that Chrome OS will bring a lot more users to Linux, since that’s already been shown to be a moot point.) Most likely, at this point you are looking at supporting a relatively small subset of Linux users; it’s not just a matter of differences between distributions, it’s a way to cut down testing time: if it works on unsupported distributions, fine, but you won’t go out of your way for them; the common “enterprisey” distributions are fine for that.

Now, at the end of the nineties or the beginning of the current decade, you wouldn’t have had to think much in terms of architectures either: using Linux on anything but x86 mostly required lots of effort (and led to instability). In all cases you had to “eradicate” the official operating system of the platform, which meant Windows for x86, Solaris for SPARC and Mac OS for PPC; but while the former was quite obvious, the others still required more work, since they were developed by the makers of the hardware in the first place.

Nowadays it is true that things have changed, but how exactly did they change? The major change is definitely the introduction of the AMD64 (or x86-64, if you prefer) architecture: an extension of the “good” old x86 that supports 64-bit addresses. This alone created quite a few problems. On one side, since it allows compatibility with old x86 software, proprietary, commercial software didn’t flock to support it quickly: after all, their software could still be used, even though it required further support from the distributions (multilib support, that is). On the other side, multilib had previously been something that only a few niche architectures like MIPS looked out for, so support for it wasn’t that mature in most distributions.

And, to put the cherry on top, users started insisting that some software be available natively for x86-64 systems, so that it would be more compatible, or at least shinier in their eyes; Java, Flash Player, and stuff like that had to be ported over. But here we reach the point where theory (or, if you’re definitely cynical, like me, fantasy) clashes with practice: making Flash work on AMD64 systems didn’t just involve calling another compiler, as many people think, partly because the technologies weren’t all available for Adobe to rebuild, and partly because the code made assumptions about the architecture it was running on.

Let’s be honest: it’s hypocritical to say that Free Software developers don’t make such assumptions; it’s more that porters and distributions fixed their code a long time ago. Proprietary software does not get this kind of peer review, and its developers are, generally, not interested in it. It takes time, it takes effort, and thus it takes money. And that money does not generally come out of architectures like Alpha or MIPS. And I’m not calling out the two of them without reason here: they are the two architectures that actually allowed some porting work for AMD64 to be done ahead of its time. The former was probably the most widely available 64-bit system Linux previously worked decently on (SPARC64 is a long story), and had code requirements very similar to x86-64 in terms of both pointer size and PIC libraries. The latter had the first implementations of multilib around.

But again, handling endianness correctly (and did you know that MIPS, ARM and PowerPC exist in multiple endian variations?), making sure that pointers are not assumed to be of any particular size, and never using ASM-only routines is simply not enough to ensure your software will work on any particular architecture. There are many problems, some of which are solvable by changing engineering procedures, and some of which are simply not solvable without spending extra time debugging on that architecture.
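To make the endianness point concrete, here is a minimal C sketch of my own (not taken from any particular project) showing the classic mistake and its portable fix when reading a little-endian value from a file or wire format:

    /* endian.c — reading a little-endian 32-bit value from a byte buffer. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint32_t read_le32_naive(const unsigned char *buf)
    {
        uint32_t v;
        /* WRONG on big-endian hosts: assumes the host byte order matches
         * the little-endian byte order of the data. */
        memcpy(&v, buf, sizeof v);
        return v;
    }

    static uint32_t read_le32_portable(const unsigned char *buf)
    {
        /* Correct everywhere: assemble the value byte by byte. */
        return (uint32_t)buf[0]
             | (uint32_t)buf[1] << 8
             | (uint32_t)buf[2] << 16
             | (uint32_t)buf[3] << 24;
    }

    int main(void)
    {
        const unsigned char buf[4] = { 0x78, 0x56, 0x34, 0x12 };
        printf("naive:    %#x\n", read_le32_naive(buf));    /* host-dependent */
        printf("portable: %#x\n", read_le32_portable(buf)); /* always 0x12345678 */
        return 0;
    }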

For instance, if you’ve got a hand-optimised x86-only assembly routine, and replacement C code for the other architectures, that C code is unlikely to get tested as much as the x86 path if your development focuses solely on x86. And I’m not kidding when I say that this is not such a rare thing to happen, even in Free Software projects. Bugs in that piece of code will be tricky to identify unless you add testing and support for that particular architecture to your development process; which, trust me, is not simple.
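This is the usual shape of the problem; a hedged sketch of mine (the routine is my own example, not from any specific project):

    /* bswap.c — byte swap with an x86-only fast path and a generic fallback. */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t bswap32(uint32_t v)
    {
    #if defined(__i386__) || defined(__x86_64__)
        /* Hand-written fast path: the only code exercised by x86 testing. */
        __asm__("bswap %0" : "+r"(v));
        return v;
    #else
        /* Generic fallback: only ever runs on the *other* architectures,
         * so a bug here will never show up in x86-only test runs. */
        return (v >> 24) | ((v >> 8) & 0x0000ff00u)
             | ((v << 8) & 0x00ff0000u) | (v << 24);
    #endif
    }

    int main(void)
    {
        printf("%#x\n", bswap32(0x12345678u)); /* expect 0x78563412 */
        return 0;
    }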

Similarly, you can think of the strict aliasing problem: GCC 4.4 introduced further optimisations that make use of the strict aliasing assumption on x86 as well; before that, this feature was mostly exploited on other architectures. Interestingly enough, the number of strict-aliasing bugs out there is definitely not trivial, and they cause spurious failures at runtime. Again, this is something you can only fix by properly testing, and debugging, on different architectures. Even though some failures now happen on x86 too, this does not mean that the exact same problems happen, no more, no less, on everything else. And you need to add your compiler’s bugs to the mix, which also doesn’t make it simpler.
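For those who haven’t run into it, this is the classic strict-aliasing violation, in a minimal sketch of mine (the kind of pattern GCC’s -Wstrict-aliasing option is meant to flag):

    /* aliasing.c — type punning through pointer casts breaks strict aliasing.
     * Build with: cc -O2 -Wall -Wstrict-aliasing aliasing.c */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static float bits_to_float_broken(uint32_t bits)
    {
        /* Undefined behaviour: accessing a uint32_t object through a float
         * lvalue; with -O2 the optimiser may reorder or drop accesses. */
        return *(float *)&bits;
    }

    static float bits_to_float_safe(uint32_t bits)
    {
        /* Well-defined: memcpy() moves the bytes explicitly, and compilers
         * typically turn it into the same single instruction anyway. */
        float f;
        memcpy(&f, &bits, sizeof f);
        return f;
    }

    int main(void)
    {
        printf("%f %f\n",
               bits_to_float_broken(0x3f800000u),
               bits_to_float_safe(0x3f800000u)); /* both should print 1.0 */
        return 0;
    }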

And all of this only covers the problems with the code itself; it comes nowhere near the problems of cross-compilation, and it says nothing about the problems and bugs that can lurk in your dependencies’ code on the other architectures, or the availability of stable-interface distributions for those architectures (how many architectures is RHEL available for?).

After all this, do you still think that the only problem keeping vendors from supporting multiple architectures is the lack of a “Universal Binary” feature? Really? If so, I have some fresh air to sell you.

Distributions *are* the strength of Linux

People disagree; some think that no operating system has any need for distributions, with all their differences and their central repositories that aren’t all that central. But one of the things that most impresses switching users is, in many cases (at least the ones I could observe myself), the presence of distributions and the ability to install almost any software by simply looking it up in the package manager.

This said, if you think that overcomplex solutions are a perfect way to solve the issues that “vendors” have with distributing their software, you’re probably missing the point quite a bit. Instead of proposing changes in all the possible layers of the operating system stack, you should try to speak with the distributors and see what you can do to make your software behave in such a way that they can lift the “send the software to the user” problem from you.

It’s a tremendously important point I’m making here: when you bring software to Linux coming from a Windows background, you’re probably making a huge number of mistakes; the most common one is to assume that the directory to work in is the directory the program is installed in, or that the current working directory is the user’s home. Both assumptions differ between Windows and Linux. Fixing these minor issues is usually trivial, if you have access to the code and are willing to bend a bit to accommodate the requests. In the case that icculus brought up, the proper solution is, generally, splitting the data from the engine, so that you can reuse the data between different architectures and have a different engine for each architecture; or have a single huge download with all the architectures available, if they add, say, 10% over the size of the data.
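Here is a minimal C sketch of the working-directory mistake and one common fix; the program and file names are made up for illustration:

    /* datapath.c — don't assume data files live in the current directory. */
    #include <stdio.h>

    /* Install prefix, normally injected by the build system, e.g.:
     *   cc -DDATADIR='"/usr/share/mygame"' datapath.c */
    #ifndef DATADIR
    #define DATADIR "/usr/share/mygame"
    #endif

    int main(void)
    {
        char path[4096];

        /* WRONG: only works if the user happens to launch the program
         * from its own install directory, as is common on Windows. */
        FILE *broken = fopen("levels.dat", "rb");

        /* Better: resolve data files against a known, configurable prefix. */
        snprintf(path, sizeof path, "%s/levels.dat", DATADIR);
        FILE *good = fopen(path, "rb");

        printf("cwd-relative: %s, prefixed: %s\n",
               broken ? "opened" : "failed", good ? "opened" : "failed");
        if (broken) fclose(broken);
        if (good) fclose(good);
        return 0;
    }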

The main point here is still that you first have to remember that distributions exist and that users like to rely on them (most of the time), and second, to understand that neither the Windows way nor the OS X way applies to Linux. This doesn’t make Linux right and the others wrong, or vice-versa; they are three different worlds, and each one has its own good and bad sides.

The biggest mistake coming from mistaking Linux for just another Windows version is providing a setup program, even worse a graphical one. If your software has no drivers to install, and nothing to register itself into (there is no registry on Linux, after all), you most likely should not offer that as the only option. First of all, such a program rarely tells you what it’s going to do, and you’d also be running it with root privileges to install the stuff; so why should you trust proprietary software with root on your system? Of course if you’re just “Joe User” you won’t care, you have no clue about that, but any decently skilled user knows that it’s never a good idea to trust any software you cannot control with root privileges on your box.

The second misconception is that some people seem to think that it’s the task of a project’s upstream (be it a proprietary software vendor or a free software project) to provide binaries, installers and packages. This is the main reason why that silly FatELF idea still tickles some people. Well, let me say it once and for all: it’s the distributions’ task to provide packages to the users!

Of course the problem is that distributions can rarely provide all the possible software in the world as packages, be it because their policy is to only allow Free Software (like Debian and Fedora) or for other reasons. In any case the solution is not to say “the distributions are the problem” but rather to ask “why are they not packaging my software?”. Of course, when the problem is a policy related to the license there is little to do, so you’re forced to rely on third-party repositories (like RPM Fusion) that don’t have such policy problems. In general, a little leeway given to the distributions can go a long way toward making your software available to users.

All kinds of projects that want to reach users should listen to the distributors: that means that if a distributor complains about the way you (don’t) release software, for instance because you only provide a “live” repository for the users to use, or about the way you make use of bundled libraries, you should most likely discuss with them a way to handle the situation; failing to do so is going to drive the distributor away (and then you’d probably complain that you have to provide binaries for that distribution yourself). Unfortunately I’m quite sure that icculus in particular has problems with stuff like that, given I’ve reported more than one Gentoo policy violation for ebuilds that come from icculus.

For proprietary software, this often means changing not so much the development of the software as some distribution details: allow the developers to redistribute your software (so no strange click-through download systems, and don’t require users to go a long way to find what they have to download); give a “raw tarball” option that the distribution can use as the source for its packaging, be it binary packages, or source-based packages like Gentoo’s.

Move the packaging task to the packagers; they know it best.

And if you’re developing proprietary commercial software, you might want to approach some distribution developers, and possibly give out a few free licenses for them to play with, so that they can package the software and give you feedback on what they would like you to change. Most of the time, packagers are pretty pragmatic and will not be scared off by “helping proprietary software”; for instance, in my overlay you can find some packaging for the Visual Paradigm suite, for which I bought a license a few weeks ago (I needed working UML software for a job); it’s nowhere near Gentoo-ready, but I haven’t given up on it. Since the Visual Paradigm customer support is also quite ready to respond to problems and suggestions, I’ve been sending them my feedback, both as a user and as a packager. Hopefully I’ll get to the point where the package is fine with Gentoo policies and I can add it to the main tree normally.

A similar situation happened with the EntropyKey software packaging: since I was interested, I got two of those and packaged it up. If upstream were interested in packaging beyond their own support (I think they already have a Debian packager on staff anyway), they could create a developer program for distributors, and I’m pretty sure almost all distributors would support the ekeyd software in no time.

Yes, I am seeing this whole situation from a packager’s point of view, but that’s because I definitely like this approach: instead of resenting us for “not providing the stuff you want”, or attacking distributions because “you have to make dozens of different packages”, try working with them. Like I said before, Ryan should stop staying inside his own little world where he can do whatever he wants and then expect people to bend to his needs; he should listen to the needs of distributors (which aren’t really so impossible!), and so should anybody who wants to enter the Linux ecosystem as it is now.

And it’s definitely not only proprietary software that still doesn’t get this: Mozilla has had a hard time getting to work with distributors, OpenOffice still has such a hard time, and Avidemux is a perfect example of a package that ignores all possible distribution requests (by still shipping a modified FFmpeg, for instance).

Most of the time, the reasons why developers don’t want to make accommodations for distributions run along the lines of “I don’t see what difference it makes”… which is also the very reason why they have such a hard time getting their packaging together.

Backward free software advocacy

Another funny thing I noticed in the comments on my guest post about Free Software Fundamentalists is that there is a very strange conception of how to interact with proprietary software when you’re definitely forced to.

Quoting the comment on why you shouldn’t use proprietary software:

When you use proprietary software, you give them market share, which further funds their development, which widens the gap between them and their free competitors. It’s like buying then freeing slaves: you do it out of good intention, but unintentionally you empowers the slave traders, who enslave even more people. True, you can get a mostly free software, but you still empower the proprietary software.

Now, besides the fact that the particular author of that comment really needs a reality check (comparing software to slavery first, and to torture later, shows some serious lack of perspective on their part), one would expect that the problem is the “market share” thing. And indeed, I know that quite a few “Free Software Advocates” seem to support ideas like the Pirate Party, and other kinds of “freedom no matter what” activities. Don’t get me wrong, I can understand them up to a point, but I don’t really agree with them fully.

I can understand very well the point of “civil disobedience” related to the non-availability of some kind of content or software, and so on. As I said before, I too download, without authorization, Bill Maher’s show, since it’s unavailable in Italy (for no good reason I can think of). On the other hand, I’m not proud of that, and given the choice of paying to watch it, I’d definitely be fine with paying for it.

What I really can’t get behind is the idea that, to avoid giving funds to proprietary software developers, you should copy, crack, or otherwise hinder the distribution of that software. Sorry, but respecting copyright is what the Free Software movement has based itself on, thanks to the GPL. Now, I know that Stallman has since declared the GPL a “workaround” and that getting rid of copyright altogether is the way to go… I’m quite sure I don’t agree; we do need copyright reform almost everywhere, but I still don’t think that killing copyright entirely is going to help free software.

Piracy is definitely not the way to go, in my view. Of course, I’m not the kind of person who says “piracy is bad so get rid of all the tools allowing it”, because I do see that a lot of the tools actually used for piracy have very legitimate uses as well: being able to decrypt and rip a DVD does not always mean that you are going to distribute it illegally; you might want to have it available on an HDD-based set-top box on your TV; you might want to put it on your iPod or PSP, or whatever, and so on. The same goes for CDs.

Piracy is, on many levels, detrimental to Free Software; let me give you an example, getting back to the family unit I described before, where pirated software was the norm even when all they needed were functions well covered by free software like Gimp, Inkscape or OpenOffice. Now, in their case I was able to bring them on board with the free alternatives based on the fact that, obviously, pirated software is often a truck loaded with viruses and other kinds of malware. If it weren’t for that, their reasoning would have been “why should I choose mediocre software that is available for free, when I could have, just as free, software that costs lots of money and is thus obviously better?”.

Now, any half-decent computer geek knows very well that “costs lots of money” doesn’t necessarily mean “it’s good” (Windows, anyone?). On the other hand, normal people almost always reason in that way (and this can be seen in so many contexts it’s not even funny, be it software, hardware, or stuff that has nothing to do with computers); ignoring this is silly if your goal is advocating free software. So you have to find another way to explain it to them.

The usual argument about the philosophy works up to a point; especially when you sanction piracy, it really starts to get watered down. The argument about lock-in also doesn’t really count with “commoners”, since the lock-in will only mean they’ll keep pirating the same software, and will make sure that all the computers they have carry the same pirated software. (It would actually be better for us if software companies really tried to strike down heavily on piracy.)

What remains is simply this: make sure that Free Software gets better, and better, and better than proprietary software. To do that, though, you need to get out of the mental shelter of “it doesn’t matter if it’s mediocre, you have to prefer it”. And now let me cover my ass regarding one very likely rebuttal that I have seen before: “well, to me it’s more important that the software is free than that it is perfect”; that’s a valid point, for you. And I’m definitely not going to tell you “use that proprietary software, it’s better!”.

On the other hand, if you wish to suggest (not force) other people to use Free Software, you should learn that most of the users out there care first about getting their work done, and only then about whether the software is free or not. Those who use computers to do any kind of job not directly involved with development will use whatever tool allows them to get paid at the end of the month (and somebody compares that to torture and war? oh my…); those who use a computer just for entertainment will care even less about what they are using, since they don’t even expect reliability out of it (mostly because of Microsoft’s past operating systems, I guess).

Guess who’s really widening free software’s reach? Advocates who have lost contact with reality and the masses of users out there? Or me and the rest of the pragmatic guys who work hard every day to create more and better free software?

*Note: I have already said it before, but I want to make it explicit once again (with the “right tone” for the issue). I know that a lot of developers out there don’t give a f♥♥k about “widening free software’s reach” and would most likely prefer that “the masses of users out there” stayed the f♥♥k away from them. To them I’m not really saying anything; they are free to do whatever they prefer. I’m simply upset by those who declare themselves “advocates” or “evangelists” and then behave in that way.*

Proprietaryware all around us

In a guest post at Boycott Boycott Novell I wrote about my frustration with so-called “Free Software Fundamentalists”. My main problem with them is that they keep insisting on not using proprietaryware at all, rather than on improving Free Software until it actually becomes the norm.

Now, one thing that might be difficult to understand is that, no matter how hard you try, it’s nearly impossible to avoid using any kind of proprietary software nowadays. And while I’m one who fights with all his strength to make sure that we have Free Software alternatives in such a state that they can be used for as many things as possible, I don’t try to fight the presence of the other kind of software. I might argue about which of our methods, theirs or mine, can reach the goal better, but that’s not what I wanted to write about right now.

For now I just wanted to note how impossible it is not to rely, at least in part, on proprietary, closed-source software (this also ties in with an older post of mine about updates):

  • do you have a cellphone? unless you’re running stuff like OpenMoko, I doubt it runs pure free software; even Nokia’s N900 has quite a few proprietary components;
  • okay, so cellphones are evil, but do you have a standard phone? remember: if it has an address book, it has firmware in it (and even if it doesn’t, it might have firmware to manage some functions);
  • do you have a VCR? a DVD player? a DivX player? is any of them running free software firmware?
  • cable or satellite TV? Sky (UK and Italy) definitely has firmware in its decoders (there is also some documentation about GPL violations in satellite decoders);
  • not even that, just a simple TV? you know, not only do they have firmware now, they even come with upgradable firmware (at least, my Sony Bravia does); some TVs also have free software on them (Sharp’s, I happen to remember), although I highly doubt they have no proprietary bits in them; heck, remote controls have firmware as well, at least the programmable ones;
  • any game console? none that I know of runs on pure free software;
  • computers usually have a proprietary BIOS, though coreboot is working to replace that; and at the same time we know of many projects working on replacing the firmware of wifi cards (although I still can’t understand why one would replace a wifi card’s firmware but not the SATA controller’s); laptops, on the other hand, have a lot of components with firmware on them; for instance I remember Lenovo laptops having firmware to control the fans and similar subsystems; and I’m pretty sure “smart batteries” have firmware as well; UPSes have firmware; external drive enclosures have firmware (and there, replacing the firmware with some free software would definitely be useful, given how many bugs the Genesys Logic firmware has!); even keyboards have firmware, at least Apple’s and probably Logitech’s as well; bluetooth dongles have firmware; hard drives and SSDs have firmware;
  • so okay, you use no external hard drive, a motherboard supported by coreboot and so on, your computer is fine; what about the monitor connected to it?
  • and finally, if you’re not using computers (then what are you doing advocating free software?), are you using a modern microwave oven, dishwasher or washing machine? while there are still lots of those appliances that use no computer-like parts, and thus no firmware, quite a lot of the new ones run proprietary firmware; I actually find those quite obnoxious because, for instance, you cannot repair your washing machine yourself if the mainboard fries: the (proprietary) firmware has to be flashed in, and, to make it even more impossible, it has to be flashed with a special dongle and a special phone with a UMTS connection.

So really, can you claim you’re not using any proprietaryware at all? If not, stop harassing my freedom of choice in the name of a supposedly higher freedom.

Who Pays the Price of Pirated Programs

I have to say sorry before anything else, because most likely you’ll find typos and grammar mistakes in this post: unfortunately I have yet to receive my new glasses, so I’m typing basically blind.

Bad alliteration in the title; it should have been “pirated software”, but that didn’t sound as good.

I was wondering earlier today who really pays the price of pirated software in today’s world; we all know that the main entity losing out from pirated software is, of course, the software’s publisher and developer. And of course most of them, starting with Microsoft, try their best to reverse the game, saying that the cost falls mostly on the users themselves (remember Windows Genuine Advantage?). I know this is going to be a flamethrower, but nowadays I happen to agree with them.

Let me explain my point: when you use pirated software, you end up not updating the software at all (because you either have no valid serial code, or you have a crack that an update would wipe out); and this includes fixes for security vulnerabilities which, often enough, on Windows at least, lead to viruses infecting the system. And of course the same problem applies, recursively, to antivirus software. And this is without counting the way most of that software is procured (eMule, torrents, and so on… note that I have ethical uses for torrent sites, for which I’d like at least some sites to be kept alive), which is often the main highway for viruses to infect systems.

So there is already a use case for keeping all your software legit; and there is one more reason why you, a Linux enthusiast, should also make sure that your friends and family don’t use pirated software: Windows botnets (as well as Linux ones, but that’s another topic) send spam to you as well!

Okay, so what’s the solution? Microsoft, obviously, wants everybody to spend money on their licenses (and in Italy they cost twice as much: I had to buy a Microsoft Office 2007 Professional license, don’t ask; in Italy it was €622 plus VAT, while from Amazon UK it was €314, with the VAT reimbursed; and Office is multi-language enabled, so there is not even the problem of Italian vs. English). I don’t entirely agree with that; I think that those who really need to use proprietary software that costs money should probably be paying for it, as this will give them one more reason to want a free alternative. All the rest should be replaced with Free (or at least free) alternatives.

So, for instance, when a friend or customer is using proprietary software, I tend to replace it along these lines: Nero can be replaced with InfraRecorder (I put this first because it’s the least known); Office with the well-known OpenOffice; and Photoshop with Gimp (at least when there is no need for professional editing).

The main issue here is that I find a lot of Free Software enthusiasts who seem to accept, and even foster, pirated software; sorry, I’m not on that line at all. And this is because I loathe proprietary software, not because I like it! I just don’t like being taken for a hypocrite.

After some time with Snow Leopard

You probably know that, as much as I am a Linux user, I’m an OS X user as well. I don’t usually develop for OS X, but I do use it quite a bit; when my laptop broke last March, I bought an iMac to replace it (and now I also have my MacBook Pro back, although with the optical unit still not working; I’m now tempted to get a second hard drive instead of an optical unit, since I can use the iMac’s DVD sharing instead).

And since I’m both a developer and a user, when the new release of OS X, Snow Leopard, was finally published, I ordered it right away. Two weeks into using the new version of OS X, I have some comments to make. And I’m going to make them here, because this is something that various Free Software projects should probably learn from too.

The first point is nothing new; Apple themselves said that Snow Leopard is nothing totally new, but rather a polished version of Leopard… with 64-bit support under the hood. The 64-bit idea is not new to Linux: a lot of distributions already support it, and where it’s available almost all system software uses it; there are still a few proprietary pieces that are not ported to 64-bit, especially where games and software like Skype are concerned, but most of the stack is in good shape for us, so we really have nothing new to learn from OS X in that field.

Sincerely, I was expecting the Grand Central Dispatch thing to be a rebranded OpenMP (by the way, thanks to whoever sent me the book; it hasn’t arrived yet, but I’ll make sure to put it to good use in Free Software once I have it!); instead it seems to be a totally different implementation, which Apple partly made available as open source free software (see this LWN article); I’m not sure whether I’m happy about it or not, given it’s yet another implementation of an already-present idea. On the other hand, it’s certainly an area where Free Software could really learn; I don’t think OpenMP is used that much outside of CPU-intensive tasks, but as the Apple guys showed in their WWDC presentation, it’s something that even something like a mail client could make good use of.
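For reference, this is the kind of data-parallel loop that OpenMP makes trivial, and that GCD approaches with queued blocks instead; a minimal sketch of my own, not Apple’s code:

    /* omp_sum.c — parallelise a CPU-bound loop with OpenMP.
     * Build with: cc -fopenmp omp_sum.c -o omp_sum */
    #include <stdio.h>

    int main(void)
    {
        double sum = 0.0;

        /* The pragma splits the iterations across the available cores and
         * merges the per-thread partial sums at the end. */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 1; i <= 100000000L; i++)
            sum += 1.0 / (double)i;

        printf("sum of 1/i up to 1e8: %f\n", sum);
        return 0;
    }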

I still have no idea what technique QuickTime X uses for its HTTP-based streaming; I’ll find out one day, though. For now I’m still working on the new implementation of the lscube RTSP parser, which should also support the already-present HTTP proxy passthrough; if it uses the same technique, that’s even better!

In the list of less-advertised changes there are also things like better Google support in iCal and Address Book: now, for instance, you can edit Google calendars from inside the iCal application, which is kinda cool (all the changes are automatically available both locally and on Google Calendar itself), and you can sync your Address Book with Google Contacts. The former is something that supposedly works with Evolution as well, although I think they really have a long way to go before it works as well as this; and that’s not to say that iCal’s integration works perfectly… at all!

The latter is instead a bit strange. I already had the impression that Google Contacts is some very bad shit (it doesn’t store all the information, the web interface is nearly unusable, and so on), but when I decided to enable the “Sync with Google” option in Address Book I probably made a big mistake: first, the thing created lots of duplicates in my book, since I had uploaded a copy of all the entries from the cellphone some time ago, and some entries were seen as new rather than as the same contact (mostly for people with an associated honorific, like “Dott.” for my doctors); this is quite strange, because vCard files are supposed to carry a unique ID for exactly this reason, to make sure they are not duplicated when moved between different services. In addition, the phone numbers got messed up as the copies added up (in Apple’s Address Book I keep them well formatted, as in +39 041 09 80 841; the Nokia removes the spaces, and it seems like Google Contacts sometimes drops the country code for no good reason at all).

Interestingly enough, though, while Leopard was known for its MobileMe support, Snow Leopard adds quite a few more options for syncing data, probably because MobileMe itself wasn’t really that much of a good deal for most people. It still didn’t support my Nokia E75 natively (but “my” plugin worked: a copy of the E71 plugin by Nokia with the phone name edited), and it doesn’t seem to support a generic SyncML provider (like Nokia’s Ovi service), but there is, for instance, a new “CardDAV” entry in the Address Book; I wonder if it’s compatible with Evolution’s CalDAV-based address book support; if so, I might want to use that, I guess.

While Apple’s showcase of Snow Leopard was aimed at criticising Microsoft’s release of Windows Vista, with all the related changes in the interface, I wouldn’t be surprised if, when deciding how to proceed with the new version, they also took into account the critiques of KDE 4’s release. I hope that Gnome 3 won’t be anything like that, and will rather follow Apple’s approach of subtle, gentle changes, although I won’t count on it.

At any rate, the experience up to now has been quite nice; nothing broke heavily, and even Parallels Desktop worked fine after the update, which actually surprised me, since I expected the kernel-level stuff to break apart with the update. I wish Linux were this stable, sometimes. But bottom line: even with a few problems, I still love Free Software better.

Mixing free software and proprietaryware

You probably know already, if you follow my blog, that I have some quite pragmatic views when it comes to software: while I despise proprietary stuff, I also use quite a bit of proprietary software and, most importantly, I pay for it.

For good or for bad, most of my paid work also involves working on proprietary software, be it supporting QuickTime RTSP extensions in feng or developing software that runs on Windows (and OS X and Linux). For this reason, as I said before, I also use Mono, since that allows me to reduce the amount of proprietary software I have to deal with.

But because working on proprietary software, for somebody used to the sharing and improving of free software, is quite difficult, I also apply one extra rule: when the customer wants closed-source proprietary software for what concerns the core business logic, I try to write as much code as possible in a generic way, so that it can be split out into LGPL-licensed libraries. This way I can release part of the code I write as free software without going against my customers’ requests, and without costing them anything more.

And thanks to the fact that there already are LGPL-licensed libraries out there that do some of the work, this also simplifies my life. Well, at least when they work and I don’t need to spend a lot of time making them work; unfortunately that is sometimes the case, especially when I have to package for Linux something that was probably never tested on, or intended to be used on, Linux. So I wish to thank Jo Shields for helping me out the other night with packaging libraries that don’t provide a strongname by themselves.

So, in the end, I still think there is space for different licenses in different contexts; in particular, while the LGPL is a compromise from pure free software philosophy, it often allows you to free code that wouldn’t be freed given a single choice (between GPL and proprietary).

On the other hand, I have to rant a bit about the price of proprietaryware, in Italy at least. For work I needed a license of Microsoft Office 2007 Professional (don’t ask, it’s a long story). In Italy, the price was €622 plus VAT; on Amazon UK, the same product (I don’t care about the language, and the code seems to work fine with multi-language Office, by the way) was up for the equivalent of €314 plus VAT (in the former case, the VAT needs to go through the tax system; in the latter, it’s directly reimbursed by Amazon, so it’s also faster to deal with). Now I’m curious to see whether the same will hold true for Windows 7 licenses (yes, I’m afraid I’m going to have to deal with those as well for my jobs) in the next months. Kudos to Apple, at least: the update to Snow Leopard was pretty cheap, was shipped right away (thanks to my passing through the business store), and really doesn’t seem to break anything, on my systems at least.

But still, I love working on Free Software: at least there I can fix the stuff that fails myself, or at least prod somebody to, most of the time!