Why I’m using Mono

In the past week I’ve written a bit about some technical problems with Mono, quite far from the political, ethical, and theoretical problems that Richard Stallman talked about, and that most people seem to have an issue with. I have been asked why I’m using Mono at all, given that I’m a Linux developer, so I’ll try to summarise the answer here.

The main issue is that I need to work: bills, hardware and all that stuff need to be paid for, and trust me, at least here pure Free Software development does not pay enough; the part that brings in more money for the same effort is custom proprietaryware, and some of it is not even tremendously unethical, because it’s customizations that stop at my customer rather than being distributed further down the line. This kind of software needs to work on proprietary operating systems as well, and that includes both Windows and OS X; Mono is a good choice for this, in my opinion.

Now, of course, you could read this the wrong way and say that I’m giving reasons for Mono not to be used for free software; on the other hand, sometimes I have to write free software that works on multiple operating systems, and that is also simpler to do with Mono. Case in point: I’ve started looking into writing a package that automatically backs up game savedata from the PSP (PlayStation Portable), mostly for me and a few friends of mine (but it would of course be released as free software); since they use Windows (and OS X), I’ll be writing it in Mono again.

Now, the view that Windows and OS X are important targets for software is somewhat debatable; I know lots of people complained about KDE wanting to port to Windows, and I for sure complained that KDE made Windows portability a major priority (which is why CMake was selected, after all). But a pragmatic view shows that sometimes it’s better to keep in mind that lots of users will use free software on those operating systems as well. This actually works out fine: a friend of mine who’s now using a lot of free software under Windows, including Pidgin and Firefox of course, will have far fewer problems migrating to Linux one day (and one day he will, I’m sure) than those who are still stuck with Microsoft’s own messenger and Internet Explorer.

Also, I’m sure somebody will say that Qt 4 and KDE support Windows as well, so what’s the point of using Mono? Well… while GCC is improving its Windows/PE support with each version (and I know at least one person who has been working a lot on that subject), there are quite a few important features it’s still lacking; if you look around, most of the free software that works under Windows, KDE included, tends to be compiled with the Microsoft Visual C++ compiler. Which means you rely heavily on another huge piece of proprietaryware (and indeed, a pretty bad one at that).

I know it’s probably largely a matter of taste, but I prefer working with Mono to having to deal with either GCC’s problems or the Visual C++ compiler.

Software sucks, or why I don’t trust proprietary closed source software

If you have followed my blog since I started writing, you might remember my post about imported libraries from last January and the follow-up related to OpenOffice; you might also know that I started some major work toward identifying imported libraries using my collision detection script, and that I postponed it until I had enough horsepower to run the script again.

And this is another reason why I’m working on installing as many packages as possible in my testing chroot. The primary reason, of course, was to test for --as-needed support, but I’ve also been busy checking builds with glibc 2.8, GCC 4.3 and, recently, glibc 2.9. In addition to this, the build is also providing me with some data about imported libraries.

With this simple piece of script, I’m doing a very rough-cut analysis of the software that gets installed, checking for the most commonly imported libraries: zlib, expat, bz2lib, libpng, jpeg, and FFmpeg:

rm -f "${T}"/flameeyes-scanelf-bundled.log
for symbol in adler32 BZ2_decompress jpeg_mem_init XML_Parse avcodec_init png_get_libpng_ver; do
    scanelf -qRs +$symbol "${D}" >> "${T}"/flameeyes-scanelf-bundled.log
done
if [[ -s "${T}"/flameeyes-scanelf-bundled.log ]]; then
    ewarn "Flameeyes QA Warning! Possibly bundled libraries"
    cat "${T}"/flameeyes-scanelf-bundled.log
fi

This checks for some symbols that are usually not present without the rest of the library, and although it gives a few false positives, it does produce interesting results. For instance, while I knew FFmpeg is very often imported, and I expected zlib to be copied into every other piece of software, it’s interesting to learn that expat is used about as much as zlib, and that every time it is imported rather than used from the system. This goes both for Free and Open Source Software and for proprietary closed-source software. The difference is that while you can fix the F/OSS software, you cannot fix the proprietary software.

What is the problem with imported libraries? The basic one is that they waste space and memory, since they duplicate code already present in the system, but there is another issue as well: they create situations where old, known, and widely fixed issues remain around for months, even years, after they were disclosed. What has protected proprietary software up to this point is mostly so-called security through obscurity: you usually don’t know that the code is there, and you don’t know in which codepath it’s used, which makes it much harder for novices to identify how to exploit those vulnerabilities. Unfortunately, this is far from being a true form of security.

Most people will now wonder: how can they mask the use of particular code? The first option is to build the library into the software, which hides it from the most naïve researchers; since the library is not loaded explicitly, its use cannot be identified through the loading of the library itself. But of course the references to those libraries remain in the code, and indeed most of the time you’ll find the libraries’ symbols defined inside the executables and libraries of proprietary software. Which is exactly what my rough script checks. I could use pfunct from the seven dwarves to get the data out of DWARF debugging information, but proprietary software is obviously built without debug information, so that would just waste my time. If they used hidden visibility, finding out the bundled libraries would be much, much harder.
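
Just to illustrate the point, here is a rough sketch of the same check done by hand; the binary path is only a placeholder, and the symbols are the usual zlib suspects:

# nm -D reads the dynamic symbol table, which stripping does not remove;
# defined ("T") entries for zlib functions suggest the library was built in
# rather than linked from the system copy.
nm -D --defined-only /opt/example/bin/someapp | grep -E ' (adler32|inflateInit_|deflate)$'

# scanelf from pax-utils reports the same thing; the + prefix restricts the
# match to defined symbols, just like in the script above.
scanelf -qs +adler32 /opt/example/bin/someapp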

Of course, finding out which version of a library is bundled in an open source software package is trivial, since you just have to look through the headers for the one defining the version — although expat copies are often stripped of the expat.h header that contains that information. With proprietary software it is quite a bit more difficult.
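
For the open-source case the check really is a couple of greps over the usual version macros (the package path here is just an example):

# zlib keeps its version in zlib.h, libpng in png.h, expat in expat.h
# (when the latter is shipped at all, as noted above).
grep -rn '#define ZLIB_VERSION' some-package/
grep -rn '#define PNG_LIBPNG_VER_STRING' some-package/
grep -rn '#define XML_MAJOR_VERSION' some-package/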

For this reason I produced a set of three utilities that, given a shared object, find out the version of the bundled library. As it stands it quite obviously doesn’t work on final executables, but it’s a start at least. Running these tools on a series of proprietary software packages that bundle the libraries left me close to hysteria: lots and lots of software still uses very old zlib versions, as well as old libpng versions. The current status is worrisome.
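
The utilities themselves are a bit more involved, but the idea behind them can be approximated from the shell (the .so path is a placeholder): both zlib and libpng embed human-readable version strings in their object code, so the string table of a shared object that bundles them gives the version away.

# zlib's deflate/inflate code carries a copyright string with the version,
# libpng embeds a "libpng version x.y.z" banner.
strings /opt/example/lib/libsomething.so | grep -E '(deflate|inflate) [0-9.]+ Copyright'
strings /opt/example/lib/libsomething.so | grep -E 'libpng version [0-9.]+'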

Now, can anybody really trust proprietary software at this point? The only way I can trust Free Software is by making sure I can fix it, but there are so many forks, copies, bundles and morphings that evaluating the security of the software is difficult even there; with proprietary software, where you cannot really be sure at all about the origin of the code, the embedded libraries, and so on, there’s no way I can trust it.

I think I’ll try my best to improve the situation of Free Software when it comes to security as well; as the IE bug demonstrated, free software solutions like Firefox can be considered working, secure alternatives even by the media, and we should try to play that card much more often.

3Com really needs better interface programmers

I’m almost tempted to send them my resumé; I’m sure I can do better than whoever designed the interface of my 3Com router.

Don’t get me wrong: at a hardware level the router is very good. It works pretty well under heavy load; I was able to crash it just once, when I tried multiple wireless transfers, but besides that it has been pretty stable.

The problems are all at the software level, the firmware level, which is what bothers me most: if they actually opened their firmware, I would probably stick with them. Unfortunately, as far as I know this type of router is not yet supported by Linux in any way, which drives me crazy.

I blogged about this a little short of two years ago; the problem has gotten worse recently because I changed my network layout. The configuration interface of the router does not allow enabling port forwarding (or, as they call it, virtual servers) if the target IP is not in the same /24 network as the router’s IP, ignoring whatever netmask the router has been configured with.

In my case, I ended up creating a 172.16.0.0/16 network here. Why? Because the /28 I was using before dried up, thanks to another bug in the router’s software. Even when leases haven’t been confirmed, the router’s DHCP server will “reserve” the IPs already assigned to a MAC address, and I couldn’t find a single way to make it release those leases. If you’re not quick at doing network calculations in your head, a /28 netmask means there are

(2^(32-28)) - 2 = (2^4) - 2 = 16 - 2 = 14

IPs available for hosts.
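
(For reference, the same number can be had from the shell for any prefix length; the two addresses subtracted are the network and broadcast addresses.)

prefix=28
echo $(( (1 << (32 - prefix)) - 2 ))    # prints 14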

As you can see from this rough schema, I have quite a few devices connected to the wireless network. And as it happens, I do support work on Windows systems from time to time, and every time one of the tasks I need to perform is connecting laptops to the wireless network to make sure they are set up to reach the Internet on their own. Add to that a few PSPs that friends of mine bring along, and you can guess that the DHCP address space ran out pretty quickly.

Besides the /16 network there is a /24 network that is forwarded to Enterprise. I was actually thinking of forwarding a whole /17 or /18 for safety, and to avoid mixing 192.168 and 172.16 addresses, but I haven’t gotten around to fixing that yet. The reason I have some address space reserved and redirected to Enterprise is that this way I can have a dedicated network just for the laptop, for iSCSI, NFS and Samba, when I’m working on Windows or moving stuff around on OS X.

Okay, so let’s return to the 3Com router now. As I said, the router, which has IP 172.16.0.1, does not allow me to redirect ports to the addresses of the DHCP-allocated devices (whose range, just to be safe, I set to 172.16.1.0/24 — again, I cannot let DHCP take more than a /24 range!). And I DHCP-allocate basically everything. Why? Because if I change the network setup it’s easier to re-run the DHCP clients on the various devices than to set them all up from scratch again; there are quite a few of them. Up to now this meant that I had no forwarding at all, for any service.

Today, by chance, I found a way around this. I was booted into Windows XP (to play Empire Earth), and I noticed that the router’s UPnP interface was being identified by Windows, and that I could manage it from there. I know a bit about UPnP because, back when I had a D-Link router, I tried writing a simple piece of software to manage port forwarding. I checked and… magically, the router lets me redirect ports to any IP address, if I ask it to via UPnP.

Unfortunately, as far as I know, the only work going on regarding UPnP under Linux is for mediaserver devices (including MediaTomb for the PS3), not port forwarding. I know Azureus supports redirecting ports and, if I recall correctly, KTorrent recently gained something too, but I don’t think there is an easy-to-use library to manage that just yet. If there were, I’d probably be working on a configuration interface myself. I think it would be really useful: it would allow services to be set up so that ports are automatically forwarded on request to the right IP, so not only would I not have to reconfigure the clients to get the new IP (thanks to DHCP), I wouldn’t have to tell the router where to find the services either.
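
To give an idea of what the router is actually accepting, this is roughly what an IGD AddPortMapping request looks like on the wire; the control URL, port and internal address below are placeholders that you would have to take from the device description XML the router announces over SSDP, so treat it as a sketch rather than something to paste in:

# control URL and port are placeholders; the real ones come from the
# device description XML announced over SSDP.
curl -s http://172.16.0.1:80/upnp/control/WANIPConn1 \
  -H 'Content-Type: text/xml; charset="utf-8"' \
  -H 'SOAPAction: "urn:schemas-upnp-org:service:WANIPConnection:1#AddPortMapping"' \
  --data '<?xml version="1.0"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:AddPortMapping xmlns:u="urn:schemas-upnp-org:service:WANIPConnection:1">
      <NewRemoteHost></NewRemoteHost>
      <NewExternalPort>6881</NewExternalPort>
      <NewProtocol>TCP</NewProtocol>
      <NewInternalPort>6881</NewInternalPort>
      <NewInternalClient>172.16.1.10</NewInternalClient>
      <NewEnabled>1</NewEnabled>
      <NewPortMappingDescription>bittorrent</NewPortMappingDescription>
      <NewLeaseDuration>0</NewLeaseDuration>
    </u:AddPortMapping>
  </s:Body>
</s:Envelope>'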

Of course, I can see there are a few downsides to this approach, mostly security-related, but I don’t think the issue gets any bigger or smaller depending on whether there is a Free Software library that helps implement this or not.

And soon enough I’ll be hitting another limit of the router’s software. The MAC address table for wireless connection control is limited to 32 entries, with no way to attach a comment to them. I will have more than 32 allowed entries soon, and I won’t know which ones refer to old laptops I fixed and which refer to devices I might take care of again soon.

I’m sincerely displeased to see that even a huge and trusted manufacturer like 3Com ships such bad firmware. I wish I could find a router with hardware as capable as 3Com’s but firmware flexible enough to provide IPv6 through a broker, for instance, or to let me write my own connection filters.

3Com, please open your firmware! You’ll make all your customers happy, and they’ll come back to you! If you were to release a router with the same hardware capabilities as mine, a much more open firmware, and 802.11n wireless, I’d buy it right away!

Fedora 9 and VirtualBox

For a job I’ve been hired to do, I need to keep a virtual machine with the development environment. The reason is that there are a few quirks in that environment which have caused me headaches before.

Since, unlike the other virtual machine (on the laptop), it doesn’t require perfect Windows support, I decided to go with VirtualBox again. Which, by the way, forces me to keep GCC 4.2 around. But oh well, that’s not important, is it?

The choice of distribution to run inside wasn’t difficult, I thought. I didn’t want to rebuild the system inside a VirtualBox, so no Gentoo. I avoided Debian and Debian-based distributions: after the OpenSSL debacle I won’t trust them easily. Yes, I am picking on them because of that, because it was a huge problem. And while I found openSUSE nice to install on a computer last month, I didn’t think it suited my needs well, so I decided to go with Fedora 9.

I’ve used Fedora a couple of times before; it’s a nice development environment when I need something up quickly and cleanly. Unfortunately, I found Fedora 9 a bit rough, more than I remembered.

I wasn’t planning on giving it Internet access at first, because of my strange network setup (I will draw up a schema, as talking with Joshua made me realise it’s pretty uncommon); but then the package manager refused to install the kernel-devel package from the DVD and insisted on using the network. So I tried to configure the network with a different IP and netmask, but that didn’t work cleanly either: the network settings interface seemed to clash with NetworkManager. I admit I didn’t want to spend too much time looking for documentation, so I just created a “VBOX” entry in NetworkManager which I select at each boot.
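
(For the record, what I was trying to do by hand amounts to a classic static ifcfg file along these lines; the device name and addresses are only examples, and NM_CONTROLLED is only honoured where the initscripts support it, so take it as a sketch. In the end I let NetworkManager keep the dedicated entry instead.)

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- static address, no DHCP
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=172.16.1.50
NETMASK=255.255.0.0
GATEWAY=172.16.0.1
# ask NetworkManager to leave the interface alone, where supported
NM_CONTROLLED=no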

After this was done, I updated all the packages as the update manager asked me to, and tried to install the new kernel-devel. It was needed by the VirtualBox guest additions, which I wanted to install to get the “magic” mouse grab. But VirtualBox refused to install them, because Fedora 9 ships a pre-release Xorg 1.5 that they don’t intend to support. Sigh.

I’m not blaming Fedora here. Lots of people blamed them for breaking compatibility with nVidia’s drivers, but they gave good enough reasons for using that particular Xorg version (I know I read a blog post about this, but I don’t remember which Planet it was on, nor the title). What surprises me is that VirtualBox, although largely open source, seems to keep the guest additions closed source, which in turn causes a few problems.

Different Linux distributions have different setups: some might use different Xorg versions, others different kernel build methods, and I sincerely think the answer is not the LSB. Interestingly, you can get the VMware mouse and video drivers directly from Xorg nowadays (although I admit I haven’t checked how well they work), but you cannot get the VirtualBox equivalents.

If any Sun/ex-Innotek employee is reading this, please consider freeing your guest additions! Your problems with supporting different Linux distributions would shrink considerably: we could all package the drivers, so instead of having to attach the ISO image of the additions, mount it, install the kernel and Xorg development files, and compile modules and drivers, the only required step would be for the distribution to identify VirtualBox as if it were any other “standard” piece of real hardware.

I hope the serial device forwarding works properly, as I’ll need that too; it has thrown a couple of errors since I started installing Fedora, though I haven’t actually tried it yet. I also hope there are picocom packages for Fedora.

I bought a software license

I finally decided to convert my video library to MPEG-4 containers with H.264 video and AAC audio, rather than the mix-and-match I had before. This is because I hardly use Enterprise to watch video anymore: not only because my office is tremendously hot during the summer, but more because I have a 32” TV set in my bedroom. Nicer to use.

Attached to that TV set there are an Apple TV (running unmodified software 2.0.2 at the moment) and a PS3. If you add all the other video-capable hardware I own, the only common denominator is H.264/AAC in an MP4 container. (I have also said before that I like the MP4 format more than AVI or Ogg.) It might be because I do have a few Apple products (iPod and Apple TV), but Linux handles this format pretty well too, so I don’t feel bad about the choice. Besides, new content I get from YouTube (like videos from Beppe Grillo’s blog) is also in this format — you can get it with youtube-dl -b.

Unfortunately, as I discussed with Joshua before, and as I already tried last year before the hospital, converting video to this format on Linux is a bit of a mess. While mencoder gives very good results for the audio/video stream conversion, producing a good MP4 container is a big issue. I tried fixing a few corner cases in FFmpeg before, but it’s a real mess to produce a file that QuickTime (and thus iTunes, and thus the Apple TV) will accept.
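
For the record, the kind of conversion I’m talking about looks roughly like this in FFmpeg terms; the encoder names depend entirely on how FFmpeg was built (libfaac in particular may not be there), so it’s a sketch rather than a recipe, and in my experience it’s the resulting container, not the encoding, that QuickTime then refuses:

# H.264 video and AAC audio in an MP4 container; libx264 and libfaac are
# only available if FFmpeg was built against them.
ffmpeg -i input.avi -vcodec libx264 -crf 22 -acodec libfaac -ab 160k output.mp4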

After spending a couple more days on the issue, I decided my time is worth more than that, and finally gave up and bought a tool I’ve been told does the job: VisualHub for OS X. It was less than €20, which is roughly what I’m usually paid per hour for my boring jobs.

I got the software and tried it out, and the result was nice: video and audio quality on par with mencoder’s, but with a properly working MP4 container that QuickTime, iTunes, the Apple TV, the iPod and, even more importantly, xine can play nicely. But the log showed a reference to “libavutil”, which is FFmpeg. Did I just pay for Free Software?

I looked at the bundle: it includes a COPYING.txt file which is, as you might have already suspected, the text of GPL version 2. Okay, so there is indeed free software in here. And I can see a lot of well-known command-line utilities: lsdvd, mkisofs, and so on. One nice thing to see, though, is an FFmpeg SVN diff. A little hidden, but it’s there. Good.
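
Since an OS X application bundle is just a directory tree, poking around it from a shell is enough to spot all of this; the paths here are illustrative:

# look for license texts and patches shipped inside the bundle
find VisualHub.app -iname 'COPYING*' -o -iname '*.diff'
# the reference to libavutil shows up in the binary's strings too
strings VisualHub.app/Contents/MacOS/VisualHub | grep -i libav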

The question then was whether they were hiding the stuff or whether it was shown and I just missed it. Plus, they have to provide the sources of everything, not just a diff of FFmpeg. And indeed, on the last page of the provided documentation there is a link to this, which contains all the sources of the Free Software used. Which is actually quite a lot. They didn’t limit themselves to taking the software as it is, though: I see at least some patches to taglib that I’d very much like to take a look at later — I’m not sharing confidential registered-users-only information, by the way; the documentation is present in the downloadable package that acts as a demo too.

I thought about this a bit. They took a lot of Free Software, adapted it, wrote a frontend and sold licenses for it. Do I have a problem with this? My conclusion is that I don’t. While I would have preferred it if they had made it clearer on the webpage that they are selling a Free Software-based package, and if they had made the frontend Free Software too, I don’t think they are doing anything evil here. They are playing by the rules, and they are providing working software.

They are not trying to exploit Free Software without giving anything back (the sources are there), and they did more than just package Free Software together: they tested and prepared encoding presets for various targets, including the Apple TV, which is my main target. They are, to an extent, selling a service (their testing and their choice of presets), and their license is also quite acceptable to me (it’s like a family license, usable on all the household’s computers as well as on a work computer in an office, if you have one).

At the end of the day, I’m happy to spend this money, as I suppose it will also go toward further developing the Free Software parts of the package, although I would have been happier to chip in a bit more if it had been fully Free Software.

And most importantly, it worked right out of the tarball, solving a problem I’d been having for more than a year now. Which means, for me, a lot less time spent trying to get the whole thing working. Of course, if one day I can do everything with FFmpeg alone I’ll be very happy, and I’ll dedicate myself a bit more to MP4 container support, both writing and parsing, in the future; but at least now I can just feed it the stuff I need converted and dedicate my time and energy to more useful goals (for me, as in paid jobs, and for the users, with Gentoo).