No more WD for me, I’m afraid

So I finally went to get the new disks I ordered, or rather I sent my sister, since I’m at home sick again (it seems my health hasn’t fully recovered yet). I ordered two WD SATA disks, two Samsung SATA disks and an external WD MyBook Studio box with two disks, with USB, FireWire and eSATA interfaces. My idea was to vary the type and brand of the disks I use, so that I don’t end up in trouble when exactly one model goes crazy, as happened with Seagate’s recent debacle.

The bad surprise started when I tried to set up the MyBook; I wanted to set it up as RAID1 to store my whole audio/video library (music, podcasts, audiobooks, TV series and so on and so forth), then re-use the space that is now filled with the multimedia stuff to store my archive of downloaded software (mostly Windows software, which is what I use to set up Windows systems, something that I unfortunately still do), ISO files (for various Windows versions, LiveCDs and the like), and similar. I noticed right away that, unlike the Iomega disk I had before, this disk does not have a physical hardware switch to select RAID0, RAID1 or JBOD. I was surprised and a bit appalled, but all the marketing material suggests that the thing works fine with Mac OS X, so I just connected it to the laptop and looked for the management software (which is stored on the disk itself rather than on a separate CD; that’s nice).

Unfortunately, once installed, the software failed to put itself in the usual place for applications under OSX, and it also failed to detect the disk itself. So I went online and checked the support site; there was an update to both the firmware of the drive (which means the thing is quite a bit more complex than I’d expect it to be) and to the management software. Unfortunately, neither solved my issue, so I figured it had to be a problem with Leopard and tried my mother’s iBook, which is still running Tiger: still no luck. Not even installing the “turbo” drivers from WD solved the problem.

Now I’m stuck with a 1TB single-volume disk set that I don’t intend to use that way. I’ll probably ask a friend to lend me a Windows XP system to set it up, and then hope that I’ll never need it again, but the thing upsets me. Sure, from a purely external hardware point of view it seems quite nice, but the need for software to configure a few parameters, and the fact that there is no such software for Linux, really makes the thing ludicrous.

Journey into the absurd: open-source Windows

Today’s interesting reading is certainly Stormy Peters’s post about hypothetically open-sourcing Windows. While I agree with her conclusion that Windows is unlikely to be open sourced any time soon, I sincerely don’t agree on other points.

First of all, I get the impression that she’s suggesting that the only reason Linux exists is to be a Free replacement for Windows, which is certainly not the case; even if Windows were open source by nature, I’m sure we’d have Linux, and FreeBSD, and NetBSD, and OpenBSD, and so on and so forth. The reason for this is that the whole architecture behind the system is different, and is designed to work for different use cases. Maybe we wouldn’t have the Linux desktop as we know it by now, but I’m not sure of that either. Maybe the only project that would then not have been created, or that could then be absorbed back into Windows, would be ReactOS.

Then there is another problem: confusing Free Software and Open Source. Even if Microsoft open-sourced Windows, adopting the same code would likely not be possible even for projects like Wine and ReactOS, which could otherwise use it as it is, because the license might well be incompatible with their own.

And by the way, most of the question could probably be answered by looking at how Apple open sourced big chunks of its operating system. While there is probably no point in even trying to get GNU/Darwin to work, the fact that Apple releases the code for most of its basic operating system does provide useful insights for stuff like filesystem hacking and even SCSI MMC command hacking, even just by being able to read its sources. It also provides access to the actual software, which for instance gives you the fsck command for HFS+ volumes on Linux (I should update it, by the way).

Or, if you prefer, at how Sun created OpenSolaris, although one has to argue that in the latter case there is so much more similarity with Linux and the rest of the *BSD systems that it says very little about how a similar situation with Windows would turn out. And in both cases, people still pay for Solaris and Mac OS X.

In general, I think that if Microsoft were to open-source even just bits of its kernel and basic drivers, the main advantages would again come from filesystem support (relatively speaking, since the filesystems of FreeBSD, Sun Solaris, NetBSD and OpenBSD are already not that well supported by Linux), and probably some ACPI support that might be lacking in Linux for now. It would be nice, though, if stuff like WMI then became understandable.

But since we already know that open-sourcing Windows is something likely to happen in conjunction with the release of Duke Nukem Forever, all this is absolutely absurd and shouldn’t be thought about too much.

Filesystems — take two

After the problem last week with XFS, today seems like a second take.

I woke up this morning to a reply about my HFS+ export patch, telling me that I have to implement the get_parent interface to make sure that NFS works even when the dentry cache is empty (which is most likely what caused some issues with iTunes while I was doing my conversion). Good enough; I started working on it.
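For reference, this is roughly the shape of the interface involved: a filesystem that wants to be exportable registers an export_operations structure, and get_parent is the callback knfsd uses to walk up to “..” when the dentry cache has nothing cached. This is only a sketch of the general shape, not my actual patch; hfsplus_lookup_parent_inode() is a made-up placeholder for the real catalog-file lookup.

/* Sketch of an NFS export hook for a filesystem driver; not real code.
 * hfsplus_lookup_parent_inode() is hypothetical and stands in for the
 * lookup of the parent's CNID in the HFS+ catalog file. */
#include <linux/exportfs.h>
#include <linux/fs.h>

static struct dentry *hfsplus_get_parent(struct dentry *child)
{
        struct inode *inode;

        /* Read the child's catalog record to find the parent directory;
         * this cannot rely on the dentry cache, which is exactly why
         * knfsd needs this callback. */
        inode = hfsplus_lookup_parent_inode(child->d_inode);
        if (IS_ERR(inode))
                return ERR_CAST(inode);

        /* Turn the inode into a (possibly disconnected) dentry. */
        return d_obtain_alias(inode);
}

static const struct export_operations hfsplus_export_ops = {
        /* .fh_to_dentry and .fh_to_parent, which decode the opaque NFS
         * file handles, would also go here. */
        .get_parent = hfsplus_get_parent,
};

/* hooked up at mount time with: sb->s_export_op = &hfsplus_export_ops; */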

And while I was actually working on it, I found that the tinderbox was not compiling. A look at dmesg showed that, once again, XFS had in-memory corruption, and I had to restart the box. Thankfully, I have my SysRescue USB stick, which allowed me to check the filesystem before restarting.

Now this brings me to a couple of problems I have to solve. The first is that I finally have to move /var/tmp to its own partition so that /var does not get clobbered if and when the filesystem goes crazy; the second is that I have to consider alternatives to XFS for my filesystems. My home is already using ext3, but I don’t need performance there so it does not matter much; my root partition is using JFS, since that’s what I tried when I reinstalled the system last year, although it didn’t turn out very well, and the resize support actually ate my data away.

Since I don’t care if my data gets eaten away on /var/tmp (the worst that might happen is me losing a patch I’m working on, or not being able to fetch the config.log for a failed package – and that is something I’ve been thinking about already), I think I’ll try something more “hardcore” and see how it goes: I’ll use ext4 for /var/tmp, unless it panics my kernel, in which case I’m going to try JFS again.

Oh well, time to resume my tasks I guess!

Filesystems

It seems like my concerns were a little misdirected; instead of the disks dying, the first problem to appear was an XFS failure on /var, after about two and a half runs of a tree build. I woke up in the middle of the night with the scary thought that something was not fine on Yamato, and indeed I found it no longer working. Bad bad bad.

I’m now considering the idea of getting a box just to handle all the storage, running something a bit better tested lately: Sun’s ZFS. While Ted Ts’o’s concerns are pretty scary indeed, it seems like ZFS is the one filesystem I could use to squeeze all the possible performance and reliability out of the disks, for network serving. And as far as I remember, Sun’s Solaris operating system comes with iSCSI target software out of the box, which would really work out well for my MacBook’s needs too.

Now the problem is, does Enterprise still work? The motherboard is what I’m not sure about, but I guess I can just try it and then replace it if needed; I certainly need to replace the power supply, since it currently mounts a 250W unit, and I also need to replace the chassis, since the one I have now has a Plexiglass side, which makes it too noisy to stay turned on all the time.

I’m considering setting it up with four 500GB drives, which would cost me around 600 euro, case and power supply included; having eight, using the Promise SATA PCI card I already have, would bring me to 1,000 euro and 4TB of space, but I don’t think it’s worth that yet. Both the Promise card and the onboard controller are SATA/150, but that shouldn’t be too much of a problem, as the Gigabit Ethernet will more than likely be the bottleneck. Unfortunately, this plan will not be enacted until I get enough jobs to finish paying for Yamato and save the money for it.

Now, while I have to make do with what I have, there is one problem. I have my video and music collection on the external Iomega drive, “hardware” RAID1, 500GB of actual space divided roughly into 200GB for music/video and 300GB for OSX’s Time Machine; the partition table is GUID (EFI) and the partitions are HFS+, so that if Yamato is ever turned off, I can access the data directly from the laptop through FireWire. This would all be fine and dandy, were it not that I cannot move my iTunes folder there, because I cannot export the filesystem through NFS.

Linux does need kernel support for exporting filesystems through NFS, and the HFS+ driver in current kernels does not support this feature — yet. The nice thing about Linux and Free Software is that you can make them do whatever you wish, as long as you have the skills to do so. And I hope I have enough skill to get this to work. I’m currently setting up a Fedora 10 install in VirtualBox so that I can test my changes without risking a panic on my running kernel.

Once that’s working I’ll focus again on the tinderboxing, even though I cannot deal with the disk problem just yet. I have a few things to write about in that regard, especially about the problem of bundled libraries.

Between Mono and Java

Some time ago I expressed my feelings about C#; to sum them up, I think it’s a nice language by itself. It’s near enough to C to be understandable by most developers who have ever worked with it or with C++, and it’s much saner than C++ in my opinion.

But I haven’t said much about Mono, even though I’ve been running GNOME for a while now and of course I’ve been using F-spot and, as Tante suggested, gnome-do.

I’ve been thinking about writing something about this since he also posted about Mono, but I think today is the best day of all, as there has been some interesting news in Java land.

While I do see that Mono has improved hugely since I last tried it (for Beagle), I still have some reservations about Mono/.NET when compared with Java.

The reason for this is not that I think Mono cannot improve or that Java is technically superior; it’s more that I’m glad Sun is finally digging Java out of the crap. OpenJDK was a very good step forward, as it opened most of the important parts of the source code to others. But it has also become more interesting in the last few days.

First, Sun accepted the FreeBSD port of their JDK into OpenJDK (which is a very good thing for the Gentoo/FreeBSD project!), and then a Darwin port was merged into OpenJDK. Lovely; Sun is taking the right steps to come out of the crap.

In particular, the porters project is something I would have liked to get involved in, had it not been for last year’s health disaster.

In general, I think Java now has a much better chance of becoming the true high-level multiplatform language and environment than C# and Mono do. This is because the main implementation is open, rather than having one (or more) open implementations trying to track down the first and main implementation.

But I’d be seriously interested in a C# compiler that didn’t need the Mono runtime, kind of like Vala.

The importance of an address book

When I was in junior high (or rather the Italian age equivalent), I used to have a small address book with the phone numbers of the few people I knew. At the time, cellphones weren’t so widespread, even in Italy, and we were too young to use them by the standards of the time.

In the first years of high school, cellphones started to spread; I ended up having one, and I had a Filofax-like organiser in which I wrote numbers and addresses.

In the third year, I actually prepared a simple table with the addresses and numbers of all the class members, for us and for the teachers.

I guess I always had the feeling I needed to organise my contacts’ information so I could access it easily.

I used to keep an almost complete contact list in KDE’s Kontact, before it lost its data a couple of times and I had to recover it from a backup. Since then, my “master” address book has been OSX’s. The nice thing about OSX’s Address Book is that it’s very easy to sync with my phone, so that the phone book there is just a copy of the one in OSX. And since OpenSync supports Evolution and my phone, I can copy the stuff back onto Enterprise.

The problem here is that iSync uses a vCard 3.0 format that seems to allow customised labels on phone numbers and addresses, while the phone only keeps the standard ones. iSync also does not allow setting the “preferred” number or email address, so every time I call or send a message to someone in my phonebook, the phone asks me which number to use. But it’s a minor issue.
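To give an idea of what gets lost in translation, here is a made-up example of the kind of vCard 3.0 entry Address Book produces (the grouped custom X-ABLabel label and the pref parameter are, as far as I can tell, exactly the bits the phone throws away):

BEGIN:VCARD
VERSION:3.0
N:Rossi;Mario;;;
FN:Mario Rossi
TEL;type=CELL;type=pref:+39 333 0000000
item1.TEL;type=VOICE:+39 041 0000000
item1.X-ABLabel:parents' home
BDAY:1980-05-12
END:VCARD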

Last week I started cleaning up my phone book and filled in the blanks. Like the birthdays. Even though neither Symbian nor OSX merges the data from the address book into its calendar, it’s still useful to have it written down there (I then manually file the birthdays on Google Calendar).

The relatives’ names are also useful: even if I don’t have a contact for them, it’s much easier to look up a name there if you’ve forgotten what your friend’s sister (or brother) is called, or whether your friend has a sister (or brother) at all!

But what is the point of all this? Well, I’m afraid I haven’t seen a portable device that has an address book good enough for me. I already wrote about some annoyances with Nokia, but from what I can see it’s still the best choice compared to the iPhone and Windows Mobile, at least as far as the address book is concerned (syncing Windows Mobile with OSX requires paying for software; syncing an iPhone with Linux is unlikely to begin with; and yes, I do want the two systems to share the same address book).

I decided to lease a phone through 3 (my provider) for when I’m in the hospital, and I went with Nokia again; the nice thing is that I can change the phone if I don’t like it, without changing the lease or spending more money on it. I chose an E71, the updated model of the one I am currently using (the E61). Once I have tried it, I’ll write whether it works with vCard 3 yet, and whether it supports a few basic features that I think should really be considered mandatory in advanced mobile address books:

  • custom labels for phone numbers, e-mail addresses, and postal addresses: people might just have home, office and mobile numbers, but offices might have multiple phone numbers, especially public offices (a note of colour: the E61’s address book supports multiple phone numbers but NOT multiple addresses);
  • support for second names: the Nokia E61 is a very strange system in that regard; when I added my second name to my own contact on OSX (I’ll write more about that in the future), the E61 still showed me as “Diego Pettenò”, while when I used Nokia PC Suite to copy the address book over to my mother’s 6288, I appeared there with both my names;
  • support for nicknames: very important; I know many people who share a first name, and a few who share a last name (I have seven contacts named Alberto, four named Marco, five named Andrea — that’s a male name in Italy), and it’d be much easier to identify them by their nickname rather than their name, but neither Symbian’s nor the iPhone’s address book accepts lookup by nickname, even though both store it;
  • handling of multiple possible inbound callers: my sister and my brother-in-law obviously share the same home phone number; when a call comes from that number the phone cannot be psychic and tell me which of the two is calling, but it would be nice if it showed at least the two or three possible candidates rather than just the raw phone number. For what it’s worth, I don’t want to remove the number from one of the two contacts, because when I’m looking specifically for one of them, I open that contact’s page in the address book and call home first, then the personal cellphone; if one of them didn’t have the home number, I’d have to switch between two contacts;
  • showing the contacts’ birthdays on the calendar when they occur: please, it’s the most basic of features; Outlook 98 had it!

On a different note, I still haven’t found a way to easily synchronise my phonebook with the Siemens S450IP cordless: I know I can upload and download the phonebook as a single vCard file with multiple contacts, but each contact can only have one number, which makes it difficult to handle an address book that is organised with multiple numbers per contact (home, work, mobile).

Locales, NLS, kernels and filesystems

One issue that is yet to be solved easily by most distributions (at least those not featuring extensive graphical configuration utilities, as Fedora and Ubuntu do) is most likely localisation.

There is an interesting blog post by Wouter Verhelst on Planet Debian that talks about setting the locale variables. It’s a very interesting read, as it clarifies the different variables pretty well.

One related issue seems to be understanding the meaning of the NLS settings that are available in the Linux kernel configuration. Some users seem to think that you have to enable the codepages there to be able to use a certain locale as the system locale.

This is not the case: the NLS settings there are basically only used for filesystems, and in particular only for the VFAT and NTFS filesystems. The reason for this lies in the fact that both filesystems are case-insensitive.

In the usual Unix filesystems, like UFS, ext2/3/4, XFS, JFS, ReiserFS and so on, file names are case-sensitive, and they end up being just strings of arbitrary characters. On VFAT and NTFS, instead, the filenames are case-*in*sensitive.

For case insensitivity, you need equivalence tables, and those are defined by the different NLS values. For instance, in Western locales the characters ‘i’ and ‘I’ are equivalent, but in Turkish they are not, as ‘i’ pairs with ‘İ’ and ‘I’ with ‘ı’ (if you wish to get more information about this, I’d refer you to Michael S. Kaplan’s blog on the subject).

So when you need to support VFAT or NTFS, you need to use the right NLS table, or your filesystem will end up corrupted (with the Turkish tables you can have two files called “FAIL” and “fail”, as those two letters are not each other’s case pair). This is the reason why you find the NLS settings in the filesystems section.
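The same kind of table exists in userland locales; here is a minimal illustration in C, assuming a glibc system where the en_US.UTF-8 and tr_TR.UTF-8 locales have been generated:

/* case-tables.c: show that case equivalence depends on the locale.
 * Build with: gcc -Wall -o case-tables case-tables.c */
#include <locale.h>
#include <stdio.h>
#include <wchar.h>
#include <wctype.h>

int main(void)
{
        /* In a Western locale, 'i' uppercases to plain 'I' (U+0049)... */
        setlocale(LC_CTYPE, "en_US.UTF-8");
        printf("en_US: towupper('i') = U+%04X\n", (unsigned)towupper(L'i'));

        /* ...but in Turkish 'i' pairs with 'İ' (U+0130) and 'I' pairs with
         * the dotless 'ı' (U+0131), so the equivalence table differs. */
        setlocale(LC_CTYPE, "tr_TR.UTF-8");
        printf("tr_TR: towupper('i') = U+%04X\n", (unsigned)towupper(L'i'));
        printf("tr_TR: towlower('I') = U+%04X\n", (unsigned)towlower(L'I'));
        return 0;
}

The kernel NLS tables answer the same question for the filesystem drivers: which characters are to be treated as the same when comparing file names.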

Of course, one could say that HFS+, used by MacOS, is also case-insensitive, so the NLS settings should apply to it too, no? Well, no. I admit I don’t know much about historical HFS+ filesystems, as I only started using MacOS at version 10.3, but at least since then the filenames are saved as Unicode, which has very well defined equivalence tables. So there is no need to select an option: the equivalence table is defined as part of the filesystem itself.

Knowing this, why does VFAT not work properly with UTF-8, as the kernel states when you mount it with iocharset=utf8? The problem is that VFAT works on a per-character equivalence basis, and UTF-8 is a variable-size encoding, which does not suit VFAT well.

Unfortunately, make oldconfig and make xconfig seem to replace the default charset with UTF-8 every time, at least on my system, maybe because UTF-8 is the system encoding I’m using. I guess I should look into whether it’s worth reporting a bug about this, or whether I can fix it myself.

Update (2017-04-28): I feel very sad to have found out over a year and a half later that Michael died. The links in this and other posts to his blog are now linked to the archive kindly provided and set up by Jan Kučera. Thank you, Jan. And thank you, Michael.

System headers and compiler warnings

I wish to thank Emanuele (exg) for discussing this problem with me last night; I thought a bit about it, checked xine-lib in this regard, and then decided to write something.

Not everybody might know this, but GCC, while it reports tons of useful warnings, especially in newer versions, together with a few that easily get annoying and are rarely useful, like the pointer sign warnings, ignores system headers when doing its magic.

What does this mean? Well, when a system library installs a header that would trigger warnings, those warnings are ignored by default. This is useful because, while you’re working on application foo, you don’t care what the glibc developers did. On the other hand, sometimes these warnings are useful.

What Emanuele found was that ignoring the warnings caused by system headers hid a redefinition of a termios value in Emacs for Darwin (OS X). I checked for similar issues in xine-lib and found a few that I’ll have to fix soonish.

I’m not sure how GCC handles the warning suppression; I think it’s worth opening a bug asking them to change the behaviour here, though, as the redefinition warning is caused by the program’s code, not by the system headers.
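A minimal reproduction of the kind of clash involved, assuming a glibc system (the replacement value is obviously made up); if the suppression behaves as described above, GCC keeps quiet about it unless you pass -Wsystem-headers, even though the offending #define sits in our own file:

/* redefine.c -- our code redefines a macro that <termios.h> already
 * defines; the redefinition is in program code, yet the previous
 * definition lives in a system header. */
#include <termios.h>

#define ECHOE 0x1234   /* clashes with the definition from <termios.h> */

int main(void)
{
        return ECHOE & 0xff;
}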

Now of course I can hear KDE users thinking “but I do get warnings from the system headers”, in reference to KDE’s headers. Well, yes:

In file included from /usr/kde/3.5/include/kaboutdata.h:24,
                 from main.cpp:17:
/usr/qt/3/include/qimage.h: In member function ‘bool QImageTextKeyLang::operator<(const QImageTextKeyLang&) const':
/usr/qt/3/include/qimage.h:58: warning: suggest parentheses around && within ||

I took this warning from a yakuake build, but it’s more or less the same for every KDE package you merge. It’s a warning caused by an include, a library include, but in general the same rules should apply to those.

Why is it not suppressed? The problem is in how the include path gets added, which is probably my main beef with system header warning suppression: it’s inconsistent.

By default the includes in /usr/include (and thus found without adding any -I switch) get their warnings suppressed. If a library (say, libmad) installs its headers there, it will get its warnings suppressed.

On the other hand, if a library installs its headers in an alternative path, like /usr/qt/3/include in the example above, or the more common /usr/include/foobar, then it depends on how that directory is added to the header search path. If it’s added through -I (almost every case) its warnings are kept; they would be suppressed only with -isystem, an option that almost no software uses and that, as far as I know, is GCC-specific.
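The inconsistency is easy to see with a small made-up example (paths and names are for illustration only); the same header produces or hides the warning depending only on which switch adds its directory:

/* /usr/include/foobar/noisy.h -- pretend this is a third-party library
 * header installed outside /usr/include proper. */
static inline int noisy(int a, int b, int c)
{
        return a && b || c;  /* -Wall: suggest parentheses around && within || */
}

/* main.c */
#include <noisy.h>

int main(void)
{
        return noisy(1, 0, 1);
}

/* gcc -Wall -I/usr/include/foobar -c main.c        -> warning reported
 * gcc -Wall -isystem /usr/include/foobar -c main.c -> warning suppressed */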

So whether a library’s headers get their warnings suppressed or not depends on the path they are installed in. Nice, uh? I don’t think so.

More work!

Disk images

As I wrote before, I’m currently converting my video archive to a format compatible with iTunes, the Apple TV and the PlayStation 3. To do so, I’m archiving it directly on my OSX-compatible partitions, so that iTunes can access it. I’m actually thinking about moving it to an NFS-shared XFS partition, but for now it’s just HFS+.

To do so, though, I had to repartition the external 1TB hard disk to give more space to the iTunes library partition. For a series of reasons, I had to back up the content of the disk first, and now I’m restoring the data. To do the backup, I used the DMG image format that is native to OSX.

The nice thing about DMG is that it transparently handles encrypted images, compressed images, read-only images and read-write images. The compressed image format, which is what I used, was able to cut in half the space used up by the partition.

Now of course I know DMG is just a glorified raw image format, and that there are a lot of alternatives for Linux, but this made me think a bit about what the average user knows about disk and partition images.

I know lots of people who think that a raw image taken with dd(1) is good enough to be stored and used for every purpose. I don’t agree with that, but I can understand why it is hard to see that a raw image taken with dd is not just “good enough”.

The problem is that a lot of users ignore the fact that in a partition, together with the actual file data, there is space occupied by the filesystem metadata and the journal, and of course all the unused space, which is not always contiguous and not always zeroed out.

Let’s take a common and easy example that a lot of users who had to use Windows at one time will have no problem understanding: the FAT32 filesystem.

FAT filesystems tend to fragment a lot. Fragmentation does not only mean that files are scattered around the disk, but also that the free space is scattered between the fragments of files. Also, when you delete a file, its content is not removed from the disk, as you can guess if you have ever used undelete-like tools to restore deleted files.

When you compress a file with tools like bzip2, the fact that it is fragmented on disk does not get in the way of the compression: the file is read logically, so if it contains the same data repeated over and over, it will be compressed quite easily. When you compress a raw image of the partition instead, the same does not apply: the fragments of different files are interleaved, and the repetition is much harder for the compressor to find.

The fact that the free space is not zeroed out causes lots of problems too, because even a perfectly defragmented partition with 40GB of unused space cannot have that space described as “40GB of zeros”… unless.

Unless the compression algorithm knows about the filesystem. If, when you take an image of the partition, you use a tool that knows about the format of the partition, it starts to be much more useful. For FAT, that would mean that a tool could just move all the file data to the start of the partition, put all the unused space at the end, and consider it empty, zeroed out. The result is no longer a 1:1 copy, but it’s probably good enough.
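A cruder workaround, which doesn’t need any knowledge of the filesystem, is to zero the free space yourself before taking the raw image: fill the partition with a file full of zeros, delete it, and then let dd plus bzip2 do their job on what are now long runs of zeros. A minimal sketch (the mount point is made up, and error handling is kept to a minimum):

/* zerofill.c -- fill the free space of a mounted filesystem with zeros,
 * then delete the file, so that a later raw image compresses better.
 * Build with: gcc -Wall -o zerofill zerofill.c */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
        const char *path = "/mnt/backup-me/zero.fill";  /* hypothetical mount */
        static char block[1 << 20];                     /* 1MiB of zeros */
        FILE *fp = fopen(path, "wb");

        if (!fp) {
                perror("fopen");
                return EXIT_FAILURE;
        }

        /* Keep writing zeros until the filesystem is full. */
        while (fwrite(block, 1, sizeof(block), fp) == sizeof(block))
                ;

        fflush(fp);
        fsync(fileno(fp));   /* make sure the zeros actually reach the disk */
        fclose(fp);
        unlink(path);        /* free the space again; the blocks stay zeroed */

        puts("free space zeroed; take the raw image and compress it now");
        return EXIT_SUCCESS;
}

This only helps with the free-space part of the problem, of course; it does nothing about fragmentation, which is where a filesystem-aware tool still wins.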

Now, of course, an alternative is just to use an archiving tool that can preserve all the file metadata (attributes, permissions, and so on); then you don’t have to worry about unused space at all. But that assumes you can use custom tools both to create the image and to restore it. Creating an image of a partition can be quite easy with an already set-up, complex tool, but the restore part of the code might need to be as lightweight as possible.

Now, I’m no expert on filesystems myself, so I might be wrong, or the software doing this might already be out there. I don’t know. But I think it wouldn’t be too far-fetched to imagine software capable of taking a hard drive with an MBR partition table and two FAT32 filesystems on it, both fragmented, with the unused space not zeroed out at all, and creating an image file that only needs bzip2 to be decompressed and dd to be written, while still being much smaller than a simple raw image compressed with bzip2, or maybe even rzip.

Such a tool could easily find duplicated files (which happens a lot on Windows because of duplicated DLLs, but can easily happen on Linux too, because of backup copies for instance), and put them near each other so as to improve compression.

I know this post is quite vague on details, and that’s because, as I said, I’m not much of an expert in the field. I just wanted to make some users reflect on the fact that a simple raw image is not always the perfect solution if what you need is efficiency in storage space.

I bought a software license

I finally decided to convert my video library to MPEG-4 containers, with H.264 video and AAC audio, rather than the mix and match I had before. This is due to the fact that I hardly use Enterprise to watch video anymore; not only because my office is tremendously hot during the summer, but more because I have a 32” TV set in my bedroom. Nicer to use.

Attached to that TV set are an Apple TV (running unmodified software 2.0.2 at the moment) and a PS3. If you add to that all the other hardware I own that can play video, the only common denominator is H.264/AAC in an MP4 container. (I also said before that I like the MP4 format more than AVI or Ogg.) It might be because I do have a few Apple products (iPod and Apple TV), but Linux also handles this format pretty well, so I don’t feel bad about the choice. Besides, the new content I get from YouTube (like videos from Beppe Grillo’s blog) is also in this format — you can get it with youtube-dl -b.

Unfortunately, as I discussed before with Joshua, and as I already tried last year before the hospital, converting video to this format with Linux is a bit of a mess. While mencoder gives very good results for the audio/video stream conversion, producing a good MP4 container is a big issue. I tried fixing a few corner cases in FFmpeg before, but it’s a real mess to produce a file that QuickTime (thus iTunes, and thus the Apple TV) can accept.

After spending a couple more days on the issue, I decided my time is worth more than what I’ve been spending on it, and I finally gave up and bought a tool that I have been told does the job: VisualHub for OSX. It was less than €20, which is usually what I’m paid per hour for my boring jobs.

I got the software and tried it out, and the result was nice: video and audio quality on par with mencoder’s, but with a properly working MP4 container that QuickTime, iTunes, the Apple TV, the iPod and, even more importantly, xine can play nicely. But the log showed a reference to “libavutil”, which is FFmpeg. Did I just pay for Free Software?

I looked at the bundle; it includes a COPYING.txt file which is, as you might have already suspected, the text of the GPL version 2. Okay, so there is Free Software in here indeed. And I can see a lot of well-known command-line utilities: lsdvd, mkisofs, and so on. One nice thing to see, though, is an FFmpeg SVN diff. A little hidden, but it’s there. Good.

The doubt then was whether they were hiding the stuff or whether it was disclosed and I had just missed it. Plus, they have to provide the sources of everything, not just a diff of FFmpeg’s. And indeed, on the last page of the documentation provided there is a link to this, which contains all the sources of the Free Software used. Which is actually quite a lot. They didn’t limit themselves to taking the software as it is, though; I see at least some patches to taglib that I’d very much like to take a look at later — I’m not sharing confidential registered-users-only information, by the way: the documentation is present in the downloadable package that acts as a demo too.

I thought about this a bit. They took a lot of Free Software, adapted it, wrote a frontend and sold licenses for it. Do I have a problem with this? My conclusion is that I don’t. While I would have preferred that they made it clearer on the webpage that they are selling a Free Software-based package, and that they had made the frontend Free Software too, I think they are not doing anything evil with this. They are playing well by the rules, and they are providing working software.

They are not trying to exploit Free Software without giving anything back (the sources are there), and they did something more than just package Free Software together: they tested and prepared encoding presets for various targets, including the Apple TV, which is my main target. They are, to an extent, selling a service (their testing and preset choices), and their license is also quite acceptable to me (it’s like a family license, usable on all the household’s computers as well as on a work computer in an eventual office).

At the end of the day, I’m happy to spend this money, as I suppose it’s also going to further the development of the Free Software parts of the package, although I would have been happier to chip in a bit more if it were fully Free Software.

And most importantly, it worked right out of the tarball, solving a problem I had been having for more than a year now. Which means, for me, a lot less time spent trying to get the whole thing working. Of course, if one day I could do everything simply with FFmpeg I’d be very happy, and I’ll dedicate myself a bit more to MP4 container support, both writing and parsing, in the future; but at least now I can just feed it the stuff I need converted and dedicate my time and energy to more useful goals (for me, as in paid jobs, and for the users, with Gentoo).