Protecting yourself from R-U-Dead-Yet attacks on Apache

Do you remember the infamous “slowloris” attack against HTTP web servers? Well, it turns out there is a new variant of the same technique that, rather than making the server wait for headers to arrive, makes the server wait for POST data before processing; it’s difficult to explain exactly how that works, so I’ll leave it to the expert explanation from ModSecurity.

Thankfully, since a lot of work was done to mitigate the slowloris attack, there are easy protections to put in place, the first of which would be the use of mod_reqtimeout… unfortunately, it isn’t currently enabled by the Gentoo configuration of Apache – see bug #347227 – so the first step is to work around this limitation. Until the Gentoo Apache team appears again, you can do so simply by making use of the per-package environment hack, along the lines of what I described in my nasty tricks post a few months ago.

# to be created as /etc/portage/env/www-servers/apache

export EXTRA_ECONF="${EXTRA_ECONF} --enable-reqtimeout=static"

*Do note that here I’m building it statically; that’s because I’d suggest everybody build all the modules statically: the overhead of having them as plugins is usually higher than that of loading a module you don’t care about.*

Now that you have this set up, you should make sure to set a timeout for requests; the mod_reqtimeout documentation is quite brief, but it shows a number of possible configurations. I’d say that in most cases, what you want is simply the one shown in the ModSecurity examples. Please note that they made a mistake there: the directive is RequestReadTimeout, not RequestReadyTimeout.
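For reference, a slightly more complete configuration along the lines of the documentation could look like the following; the values here are just a starting point I’d expect to tune per workload, not something I can vouch for universally:

# Give clients between 10 and 40 seconds to send the request headers, and
# an initial 30 seconds for the body, extending the timeout by one second
# for every further 500 bytes received.
RequestReadTimeout header=10-40,MinRate=500 body=30,MinRate=500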

Additionally, when using ModSecurity you can stop the attack in its tracks after a few requests time out, by blacklisting the IP address and dropping its connections, freeing slots for other requests to arrive; this can be easily configured through this snippet, taken directly from the above-linked post:

RequestReadTimeout body=30

SecRule RESPONSE_STATUS "@streq 408" "phase:5,t:none,nolog,pass, setvar:ip.slow_dos_counter=+1,expirevar:ip.slow_dos_counter=60"
SecRule IP:SLOW_DOS_COUNTER "@gt 5" "phase:1,t:none,log,drop, msg:'Client Connection Dropped due to high # of slow DoS alerts'"

This should let you cover yourself quite nicely, at least if you’re using hardened, with grsecurity enforcing per-user limits. But if you’re on hosting where you have no say over the kernel – as I am – there is one further problem: the init script for Apache does not respect the system limits at all — see bug #347301.

The problem here is that when Apache is started during the standard system init, there are no limits set for the session it is running from, and since the script doesn’t use start-stop-daemon to launch the apache process itself, no limits are applied at all. This makes for an easy DoS against the whole host, as it will quickly exhaust the system’s memory.

As I posted on the bug, there is a quick and dirty way to fix the situation by editing the init script itself and changing the way Apache is started up:

# Replace the following:
        ${APACHE2} ${APACHE2_OPTS} -k start

# With this:

        start-stop-daemon --start --pidfile "${PIDFILE}" ${APACHE2} -- ${APACHE2_OPTS} -k start

This way at least the generic system limits are applied properly. Please note, though, that start-stop-daemon’s limitations will not allow you to set per-user limits this way.

On a different note, I’d like to spend a few words on why this particular vulnerability is interesting to me: the attack relies on long-winded POST requests with very low bandwidth, since just a few bytes are sent before the timeout is hit… not unlike the RTSP-in-HTTP tunnelling that I have designed and documented in feng during the past years.

This also means that application-level firewalls will sooner or later start filtering these long-winded requests, and that will likely put the final nail in the coffin of RTSP-in-HTTP tunnelling. I guess it’s definitely time for feng to move on and implement real HTTP-based pseudo-streaming instead.

Some new notes about AppleTV

Another braindump, so I can actually put in public what I’ve been doing in my spare time lately, given that most of it likely won’t continue in the coming months, as I’m trying to find more stable, solid work than what I’ve been doing as of late.

If you have followed me for a long time you might remember that a few years ago I bought an AppleTV (don’t laugh) for two main reasons: I actually wanted something in my bedroom to watch anime and listen to music, and I was curious about its implementation from a multimedia geek’s point of view. Now, a lot of what I have seen of the AppleTV is negative, and I’m pretty sure Apple noticed it just as well as I have. Indeed they learned from a lot of their previous mistakes with the release of the new AppleTV. Some of the “mistakes they learnt from” would probably not be considered mistakes by Free Software activists and hackers, as they were designed to keep people out of their platform, but that’s beside the point now.

The obvious problems (bulkiness, heat, power) were mostly fixed in hardware by moving from a mobile i686-class CPU to an ARM-based embedded system; the main way around their locks (the fact that the USB port was a standard host port, not a gadget one, only disabled by the lack of a Darwin kernel driver for USB) is also gone, only to be replaced by the same jailbreak situation they have on the iPhone and other devices. So basically, while they tried to make things a lot more difficult, the result is simply that it got hacked in a different way. While it definitely looks sleeker to keep near your TV, I’m not sure I would have bought it had it been released this way the first time around.

At any rate, the one I have here is in its last months, and as soon as I can find something that fits into its space and on which I can run XBMC (fetching videos off a Samba share on Yamato), I’ll probably just get rid of it, or sell it to some poor fellow who can’t be bothered to get something trickier but more useful. But while I want the device to accept anime and TV series in the formats I already have them in (assuming I can’t get them legally under decent terms), some months ago I decided that at least the music could bend to the Apple formats — for the simple reason that they are quite reasonable, as long as I can play them just fine in Europe.

Besides a number of original music CDs (metal isn’t really flattered by the compression most online music stores apply!), I also have (fewer) music DVDs with videos and concerts; plus I sometimes “procure” myself Japanese music videos that haven’t been published in the western world (I’m pretty much a lover of the genre, but they don’t make it too easy to get much of it here; I do have almost all of Hikaru Utada’s discography in original form, though). For the former, Handbrake (on OS X) did a pretty good job, but for the newer music videos, which are usually in high definition, it did a pretty bad one.

Let’s cue FFmpeg back in: since the last time I ranted, it actually gained support for the mov/mp4 format that is finally able to keep up with Apple’s (I have reported some of the problems myself, so while I didn’t have a direct hand in getting it to work, I can at least say I feel more confident about what it does now). To be honest, I had already tried doing the conversion with FFmpeg a few times; the main problem was finding a proper x264 preset that didn’t enable features the AppleTV fails to work with (funnily enough, since Handbrake also uses x264, I know that sometimes, even though iTunes allows the files to be copied over to the AppleTV, they don’t play straight). Well, this time I was able to find the correct preset on the AwkwardTV wiki, so after saving it to ~/.ffmpeg/libx264-appletv.ffpreset the conversion process seemed almost immediate.

A few tests later, I can tell it’s neither immediate in procedure nor in the time required to complete. First of all, iTunes enforces a frame size limit on the file; while this is never a problem for standard-definition content, like the stuff I ripped from my DVDs, it can be a problem for high-definition content. So I wrote a simple script (which I pasted online earlier tonight, and will publish once it’s polished a bit more) that, through ffprobe, grep and awk, keeps the correct aspect ratio of the original file but resizes it to a frame size the AppleTV is compatible with (720p maximum). This worked fine for a few videoclips, but then it started to fail again.
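The script is nothing more sophisticated than the following sketch (reconstructed from memory with my own choice of names, and assuming the FFmpeg command-line syntax of the time, with its -vpre preset loading; audio handling is left out on purpose):

#!/bin/sh
# Fit a video within 1280x720 while keeping its aspect ratio, then encode
# it with the AppleTV x264 preset.
input="$1" output="$2"

width=$(ffprobe -show_streams "$input" 2>/dev/null | grep '^width=' | head -n 1 | cut -d= -f2)
height=$(ffprobe -show_streams "$input" 2>/dev/null | grep '^height=' | head -n 1 | cut -d= -f2)

# Compute the largest frame that fits in 1280x720 at the same ratio,
# rounded down to even values as x264 wants them.
size=$(awk -v w="$width" -v h="$height" 'BEGIN {
    s = 1280 / w; if (720 / h < s) s = 720 / h; if (s > 1) s = 1;
    printf "%dx%d", int(w * s / 2) * 2, int(h * s / 2) * 2 }')

ffmpeg -i "$input" -s "$size" -vcodec libx264 -vpre appletv -acodec copy "$output"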

Turns out that 1280×720, which is the 720p “HD Ready” resolution, is too much for the AppleTV. Indeed, if you use those parameters to encode a video, iTunes will refuse to sync it over to the device. A quick check around pointed me at a possible explanation and solution: while all the video files have a Source Aspect Ratio of 16:9, their Pixel Aspect Ratio is sometimes 1:1 and sometimes 4:3 (see Wikipedia’s Anamorphic Widescreen article for the details and links to the descriptions of SAR and PAR). While Blu-ray and most other Full HD systems are designed to work fine with a 1:1 PAR (non-anamorphic), the AppleTV isn’t, probably because it’s just HD Ready.

So a quick way to get the AppleTV to accept the content is simply to scale it back to anamorphic widescreen, and you’re done. Unfortunately that doesn’t seem to cut it just yet; I have at least one video that doesn’t work even though the size is the same as before. Plus another has 5.1 (6-channel) audio, and FFmpeg seems to be unable to downmix it to stereo (from and to AAC).
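For the record, what I mean by scaling back to anamorphic boils down to something along these lines (again a sketch with made-up file names and the same preset as above; 960×720 stored pixels flagged for 16:9 display is my reading of the anamorphic 720p the device accepts):

# Store the video at 960x720 (4:3 pixel aspect) but flag it for 16:9
# display, which is what anamorphic widescreen amounts to here.
ffmpeg -i input.mkv -s 960x720 -aspect 16:9 -vcodec libx264 -vpre appletv -acodec copy output.mp4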

Free Software Integralism — Helping or causing trouble?

So today we have two main news items that seem to interest Free Software advocates and, to a lesser extent, developers:

  • Apple officially decided to shun Java as well as Flash in their own operating system, discontinuing the development of their own JRE for the former, and declaring that the latter won’t be available as part of the base operating system install in the next release (10.7 Lion);
  • the Free Software Foundation published the criteria under which hardware can be endorsed by the FSF itself.

I’ll start with the second point, first pointing directly to the post by Wouter Verhelst of Debian fame; as he points out, the FSF is basically asking hardware manufacturers to forget that, up to now, Linux has really been a niche market, and to push only their way: “my way or the highway”.

Now this could make sense given the ideas embedded in the FSF, although it definitely shows a lack of marketing sense (I’ll wait and see whether any serious hardware manufacturer takes this route; I bet… none will). What should really upset users and developers alike is the stated reason: “[other badges] would give an appearance of legitimacy to those products”… what? Are you now arguing that proprietary software is not legitimate at all? Well, let’s give a bit of thought to the meaning of the word, from Wiktionary:

  1. In accordance with the law or established legal forms and requirements; lawful.
  2. Conforming to known principles, or established or accepted rules or standards; valid.
  3. Authentic, real, genuine.

If you can’t call that integralism, then I don’t even think I can argue with you on anything.

Now, on the other issue: Apple has decided that neither Flash nor Java suits their needs; in particular they are working on a “Mac App Store” that is designed, obviously, to put the spotlight (sorry for the pun) on the applications they find most compliant with their requirements. Is that such a bad thing? I’m pretty sure the Free Software Foundation is doing the same by striving for “100% Free GNU/Linux distributions”. We’re talking about different requirements for acceptance; does one side have to be totally wrong, unethical, unlawful, rapist and homicidal? (Yes, I’m deliberately exaggerating here, bear with me.)

What I find very upsetting is that instead of simply shedding light on the fact that the requirements of the Mac App Store might not be in the best interest of users but rather in Apple’s, Free Software advocates seem to be siding with Adobe and Oracle… what? Are we talking about the same Adobe whose Flash software we all despise so much, the same Oracle that’s blamed for killing OpenSolaris?

But it seems that somehow Apple is a bigger problem than Adobe and Oracle; why’s that? Well, because somehow they are eating into the numbers that, if better handled at many levels, starting from the FSF itself, would have been there for Linux (GNU or not, at this point): the hacky geeks, the tech-savvy average person who’s tired of Windows, the low-budget small office that doesn’t want to spend half its money on Windows licenses…

In-fighting, religious fights for “purity”, elitism, all these things have put Linux and FLOSS in a very bad light for many people. And rather than trying to solve that, we’re blaming the one actor that at the time looked much less palatable, and became instead the most common choice: Apple.

On the other hand, I’d like to point out that when properly managed, Free Software can become much better than proprietaryware: Amarok (at its 1.4 apex) took iTunes head on and became something better – although right now it feels like iTunes has caught up and Amarok hasn’t closed the gap just yet.

So, wasn’t HTML5 supposed to make me Flash-free?

Just like Multimedia Mike, I have been quite sceptical about seeing HTML5 as a saviour of the open web. Not only because I dislike Ogg with a passion after having tried to parse it myself without the help of libogg (don’t get me started), but because I can pragmatically expect a huge number of problems related to serving multiple variants of each video file depending on browser and operating system. Lacking common ground, it’s generally a bad situation.

But I had been hoping that Google’s commitment to supporting HTML5 video, especially on Youtube, would give me a mostly Flash-free environment; unfortunately that doesn’t seem to be the case. There is a post on the Youtube API blog from last month that tries to explain to users why they are still required to use Flash. It has, though, a sour taste that reminds me of Microsoft’s boasting about Windows Genuine Advantage. I guess that notes such as this one:

Without content protection, we would not be able to offer videos like this.

which land me on a page that says “This rental is currently unavailable in your country.” at the top, with no further notice and no warning that your mileage may vary, make it very likely for me to have mixed feelings about a post like that.

Now, from that same post, I got the feeling that for now Google is not planning on supporting embedded Youtube videos using HTML5, and relies entirely on Flash for that:

Flash Player’s ability to combine application code and resources into a secure, efficient package has been instrumental in allowing YouTube videos to be embedded in other web sites. Web site owners need to ensure that embedded content is not able to access private user information on the containing page, and we need to ensure that our video player logic travels with the video (for features like captions, annotations, and advertising). While HTML5 adds sandboxing and message-passing functionality, Flash is the only mechanism most web sites allow for embedded content from other sites.

Very unfortunate, given that a number of websites, including one of a friend of mine, actually use Youtube to embed videos; even my blog has a post using it. It’s still a shame, because it’s a loss, for Google, of the iPad users… or is it? I played around for a minute with an iPad at the local Mediaworld (Mediamarkt) last week, and I looked at my friend’s website with it. The videos load perfectly, using HTML5 I guess, given that the iPad does not support Flash at all.

So what’s the trick? Does Google provide HTML5-enabled embedded videos when it detects the iPhoneOS/iOS Safari identification in the user agent? Or is it Safari that translates the Youtube links into HTML5-compatible links? In the former case, why does it not do that when it detects Chrome/Chromium as well? In the latter, why can’t there be an extension to do the same for Chrome/Chromium?

Once again, my point is that you cannot simply characterize Apple and Google as absolutely evil and absolutely good; there is no “pureness” in our modern world as it is, and I don’t think that striving for it is going to work at all… extremes are not suited to human nature, not even extreme purity.

In all fairness

I know that Apple gets a lot of hate from Free Software developers (and not only them) for the way they handle their App Store, mostly regarding the difficulty of actually getting applications approved. I sincerely have no direct experience with it, but if I apply what I learnt from Gentoo, the time they take to approve applications sounds about right for a thorough review.

Google, on the other hand, is said to take much less time, but from personal experience searching for content on the Android Market, I can only find DVD Jon’s post quite on point: a number of applications that are, if not outright frauds, at least on the verge of it, got approved easily.

On the other hand, as soon as Google was found to have added to the Froyo terms of service the fact that they reserve the option of remotely killing an application, tons of users cried foul — just like they did for Apple, which also has the same capability and has exercised it against applications that were later found not to comply with their terms of service.

*A note here: you might not like the way Apple insists on telling you what you should or should not use. I understand it pretty well, and that’s one of the reasons why I don’t use an iPhone. On the other hand, I don’t think you can say that Apple is doing something evil by doing so. Their platform, their choice; get a different platform for a different choice.*

So there are a number of people who think that Apple’s policy of reviewing applications is evil (and Google’s allowing of possible frauds is A-OK), and that in both cases the remote killswitch is something nasty, a way for them to censor content for whatever evil plan they have. That paints both of them in a bad light, doesn’t it? But Mozilla should be fine, shouldn’t it?

I was sincerely wondering, while reading Netcraft’s report of the malware add-on found in the Mozilla add-ons index, what would be thought by those people who always find a way to despise “big companies” like Apple and Google at the same time, asking their users to choose “freer” alternatives (oftentimes with worse problems).

I quote: “Mozilla will be automatically disabling the add-on for anyone who has downloaded and installed it.” So Mozilla has a remote killswitch for extensions? Or how are they achieving this?

And again: “[Mozilla] are currently working on a new security model that will require all add-ons to be code-reviewed before becoming discoverable on addons.mozilla.org.” Which means they are going to do the same thing that Apple and Google already do (we’ll have to wait and see to what degree).

Before people misunderstand me: I have nothing against Mozilla and I think they are on the right track here. I would actually hope for Google to tighten their approval process, even if that means a much longer turnaround before new applications become available. As a user, I’d find it much more reassuring than what we have right now (why do half the demo/free versions of various apps want to access my personal data, hmm?).

What I’m trying to say here is that we should really stop crying foul at every choice that Apple (or Microsoft, or Sony, or whoever) makes; they might have quite good reasons for it, and we might end up following in their steps (as Mozilla appears to be about to do).

A few more reasons why FatELF is not

Note that the tone of this blog post reflects the Flameeyes of 2010, and does not represent the way I would phrase it in 2020 as I edit typos away. Unfortunately WordPress still has no way to skip the social sharing of the link after editing. Sorry, Ryan!

It seems like people keep expecting Ryan Gordon’s FatELF to solve all their problems. Today I was told I was being illogical for writing that it has no benefits. Well, I’d like to reiterate that I’m quite sure of what I’m saying here! And even if this is likely a pointless exercise, I’ll try to convey once again why I think that even just discussing it wastes time and resources.

I have to say, most of the people claiming that FatELF is useful seem to be experts at grasping at straws: they have changed their minds so many times about how it should be used, where it should be used, and what benefits it has, that this post will jump from point to point quite confusingly. I’m sorry about that.

First of all, let me try to make this clear: FatELF is going to do nothing to make cross-arch development easier. If you want easier cross-arch or cross-OS development, you go with interpreted or byte-compiled languages such as Ruby, Python, Java, C#/.NET, or whatever else. FatELF focuses on ELF files, which are produced by the C, C++ and Fortran compilers and the like. I can’t speak for Fortran, as it’s a language I do not know, but C and C++ data types are very much specific to the architecture, the operating system, the compiler, heck, even the version of the libraries, both system and third-party! You cannot solve those problems with FatELF: whatever benefits it has can only appear after the build. But at any rate, let’s proceed.

FatELF supposedly makes builds easier, but it doesn’t. If you really think so, you have never tried building something as an Apple Universal Binary. Support in autoconf, and most likely in any other build system along those lines, simply sucks. The problem is that whatever result you get from a test on one architecture might not hold on another. And Apple’s Universal Binary only encompasses a single operating system, one developed without thinking too much about compatibility with others and kept under the same tight control, where the tests for APIs are going to be almost identical for all the arches. (You might not know this, but Linux’s syscall numbers are not the same across architectures; the reason is that they were actually designed to partly maintain compatibility with the proprietary (and non-proprietary) operating systems that originated on each architecture and were mainstream at the time. So, for instance, on IA-64 the syscall numbers are compatible with HP-UX, while on SPARC they are, for the most part, compatible with Solaris.)

This does not even come close to considering the mess that is the toolchain. Of course you could have a single toolchain patched to emit code for a number of architectures, but is that going to work at all? Given that I have actually worked as a consultant building cross-toolchains for embedded architectures, I can tell you that it’s difficult enough to get one working. Add the need for patches for specific architectures, and you might start to get part of the picture. While binutils already theoretically supports a “multitarget” build that adds, in one build, support for all the architectures they have written code for, doing the same for gcc is going to be a huge mess. Now you could (as I suggested) write a cc frontend that takes care of compiling the code for multiple architectures at a time, but as I said above, it’s not easy to ensure the tests are actually meaningful across architectures, let alone operating systems.

FatELF cannot share data sections. One common mistake when thinking about FatELF is assuming that it only requires duplicating the executable sections (.text), but that’s not the case. Data sections (.data, .bss, .rodata) depend on the data types, which as I said above are architecture dependent, operating system dependent, and even library dependent. They are part of the ABI; each ELF you build for a number of target arches right now is going to have its own ABI, so the data sections are not shared. The best I can think of to reduce this problem is to make use of -fdata-sections and then merge sections with identical content; it’s feasible, but I’m sure that in the best case it’s going to create problems with caching of nearby objects, and at worst it’s going to cause misalignment of data that needs to be read in the same pass. D’uh!

Just so you know how variable the data sections are: even though they could just use #ifdef, both the Linux kernel and the GNU C Library (and most likely uClibc as well, even though I don’t have it around to double-check) install different sets of headers per architecture; this should be an indication of how different the interfaces are between them.

Another important note here: as far as I could tell from the specs that Ryan provided (I really can’t be arsed to look back at them right now), FatELF files are not interleaved, with their sections mixed, but merged into a sort-of archive, with the loader/kernel deciding which part of it will be loaded as a normal ELF. The reason for this decision likely lies in one teeny tiny detail: ELF files were designed to be mapped straight from disk into data structures in memory; for this reason, ELF files have classes and data orders. For instance, x86-64 uses ELF of class 64 and order LSB (least-significant byte first, or little-endian) while PPC uses ELF of class 32 and order MSB (most-significant byte first, or big-endian). It’s not just a matter of the content of .text: it’s also pervasive in the index structures within the ELF file, and it is so for performance reasons. Having to swap all the data or deal with different sizes is not something you want to do in the kernel loader.

When do you distribute a FatELF? This is one tricky question, because both Ryan and various supporters change opinion here more often than they change socks. Ryan said that he was working on getting an Ubuntu built entirely of FatELF binaries, leading one to understand that distributions would be using FatELF for their packaging. Then he said that it wasn’t something for packagers but for Independent Software Vendors (ISVs). Today I was told that FatELF simplifies distribution by providing a single executable that can be “executed in place”, rather than having to provide two of them.

Let’s be clear: distributors are very unlikely to provide FatELF binaries in their packages. Even though it might sound tempting to implement multilib with them, it’s going to be a mess just the same; it might reduce the size of the whole archive of binary packages, because you share the non-ELF files between architectures, but it’ll increase the traffic used to download them, and while disk space keeps getting cheaper and cheaper, network traffic is still a scarce commodity. Even more so, users won’t like having stuff installed for architectures they don’t use, and will likely ask for a way to clean it up, at which point they’ll wonder why they are downloading it at all. Please note that while Apple did their best to convince people to use Universal Binary, a number of tools were produced to strip the alternative architectures out of the executable files altogether.

Today, as I said, it was suggested to me that it is easier to distribute one executable file rather than two. But when do you find yourself doing that at all? Quite rarely; ISVs provide pre-defined packaging, usually in the form of binary packages for a particular distribution (which already makes it a multiple-file download). Most complex software will be in an archive anyway, because you don’t just build everything in but rather split it into different files: icons, images, … even Java software, which actually uses archives as its main object format (JAR files are ZIP files), ships in archives installing separate data files, multiple JARs, wrapper scripts and so on and so forth. I was also told that you could strip the extra architectures at install time, but if you do so, you might as well decide which of multiple files to install, making it moot to use a fat binary at all.

All in all, I have yet to see one use case that FatELF actually solves better than a few wrapper scripts and an archive. Sure, you can create some straw-man arguments where FatELF works and scripts don’t, such as the “execute in place” idea above, but really, tell me: when was the last time you needed that? Please also remember that while “changes happen all the time”, we’re talking about changing, in a particularly invasive way, a number of layers:

  • the kernel;
  • the loader;
  • the C library (separated from the loader!);
  • the compiler, which almost has to be rewritten;
  • the linker, obviously;
  • the tools to handle the files;
  • all the libraries that change API among architectures.

Even if all of this became stock, there’s a huge marginal cost here, and it’s not going to happen anytime soon. And even if it did, how much time is it going to take before it gets mainstream enough to be used by ISVs? There are some that still support RHEL3.

There are “smaller benefits”, as, again, I was told before, and those are not nothing. Maybe that’s the case, but the question is “is it worth it?”. Once in the kernel, it is not going to take much work at runtime, but is it worth the marginal cost of implementing all that stuff and maintaining it? I most definitely don’t think so. I guess the only reason why Apple coped with it is that they had most of the logic already developed and lying around from when they transitioned from M68K to PowerPC.

I’m sorry to burst your bubble, but FatELF was an ill-conceived idea that is not going to gain traction, for a very good reason: it makes no sense! All of the use cases I have read up to now are straw men that resemble either what OS X does or what Windows does. But Linux is neither. Now, can we move on, please?

Apple’s HFS+, open-source tools, and LLVM

The title of this post seems a bit messed up, but it’ll make sense at the end. It’s half a recounting of my personal hardware troubles and half a recounting of my fight with Apple’s software, and not of the kind my readers hate to read about, I guess.

I recently had to tear apart my Seagate FreeAgent Xtreme external HDD. The reasons? Well, besides it dropping the connection while in use (with Yamato) over eSATA, forcing me to use either FireWire or USB (both much slower — and I did pay for it to use eSATA!), yesterday it decided it didn’t want to let me access anything via any of the three connections, not even after a number of power cycles (waiting for it to cool down as well); this was probably related to the fact that I tried to use it again over eSATA, connected to the new laptop, to try copying an already set-up partition off the local drive to make space for (sigh) Windows 7.

Luckily, there was no data worth spending time on in that partition, just a few GNOME settings I could recreate in a matter of minutes anyway.

Since the Oxford Electronics-based bridge on the device decided not to help me get my data back, I decided to break it open, with the help of a Youtube video (don’t say that Youtube isn’t helpful!), and took the drive itself out — which is, obviously, a Seagate 7200.11 1TB drive, quite a sturdy one to look at. No, I won’t add it as the 7th disk drive in Yamato, mostly because I fear it wouldn’t be able to start up anymore if I did so.

Thankfully, I bought a Nilox-branded “bay” a month or so ago, when I gave away what remained of Enterprise to a friend of mine (the only task Enterprise was still doing was saving data off SATA disks when people brought me laptops or PCs that had fried up). My choice of that bay was due to the fact that it allows you to plug in both 3.5” and 2.5” SATA disks without having to screw them into anything. It does look a lot like something out of the Dollhouse set, to be honest, but that doesn’t matter now.

I plugged it in and started downloading the data; I can’t be sure it is all fine, so I deleted lots and lots of stuff I wouldn’t feel safe about for a while. Then I shivered, fearing the disk itself was bad and that I had no way to check it… thankfully, the bay uses Sunplus electronics, and – lo and behold! – smartmontools has a driver for the Sunplus USB bridge! A SMART test later, and the disk turns out to be in better shape than any other disk I have ever used. Wow. Well, it’s to be expected, as I never compiled on it.
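For those who want to repeat this at home, the invocation is along these lines (the device node is whatever your system assigns to the bay; this is not copied from my shell history, so treat it as a sketch):

# Kick off a long self-test through the Sunplus USB bridge, then read
# back the SMART attributes and the self-test log.
smartctl -d usbsunplus -t long /dev/sdb
smartctl -d usbsunplus -a /dev/sdb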

Anyway, what can I do with a 1TB SATA disk I cannot plug into any computer as it is? Well, there is actually one thing I can do with it: backup storage. Not the kind of rolling backup I’m currently doing with rsnapshot and the WD MyBook Studio II over eSATA (anything else is just too slow to back up virtual machines), but rather a fixed backup of stuff I don’t expect to be looking at or using anytime soon. But to be on the safe side, I wanted to have it in a format I can access, on the go, from the Mac as well as from Linux; and vfat is obviously not a good choice.

The choice is, for the Nth time, HFS+. Since Apple has published quite a bit of specs on the matter, the support in Linux is decent, albeit far from perfect (I still haven’t finished my NFS export patch, there is no support for ACLs or extended attributes, and so on). It’s way too unreliable for rsnapshot (with its hardlinking) but it should work acceptably well for this kind of storage.
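On the Linux side this amounts to very little; something like the following, with device and mount point made up for the example (the tools come from the diskdev_cmds port I discuss below):

# Create the HFS+ filesystem with a volume label, then mount it.
mkfs.hfsplus -v "OfflineBackup" /dev/sdb1
mount -t hfsplus /dev/sdb1 /mnt/backup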

The only reason I have not to use it for something I want to rely on, as things stand, is that the tools for filesystem creation and checking (mkfs and fsck) are quite old. I’m not referring to “hfsutils” or “hfsplusutils”, both of which are written from scratch and have a number of problems, including, but not limited to, shitty 64-bit code. I’m referring to the diskdev_cmds package in Gentoo, which is a straight port of Apple’s own code, released as FLOSS under the APSL2 license.

Yes, I call that FLOSS! You may hate Apple as much as you wish, but even the FSF considers APSL2 a Free Software license, albeit one with problems; on the other hand, they explicitly state this (emphasis mine):

For this reason, we recommend you do not release new software using this license; but it is ok to use and improve software which other people release under this license.

Anyway, I went to Apple’s releases for the 10.6.3 software (interestingly, they haven’t yet published 10.6.4, which was released just the other day), and downloaded diskdev_cmds, as well as the xnu package that contains their basic kernel interfaces, and I started working on an autotools build system to make it possible to easily port the code in the future (thanks to git and branching).

The first obstacle, besides the includes obviously changing, was that Apple decided to make good use of a feature they implemented as part of Snow Leopard’s “Grand Central Dispatch”, their “easy” multi-threading implementation (somewhat similar to the concept of OpenMP): “blocks”, anonymous functions for the C language, an extension they worked into LLVM. So plain GCC is unable to build the new diskdev_cmds. I could either go fetch an older diskdev_cmds tarball, from Leopard rather than Snow Leopard, where GCD was not yet implemented, or I could up the ante and try to get it working with other tools. Guess what?

In Gentoo we already have LLVM around, and the clang frontend as well. I decided to write an Autoconf check for blocks support, and rely on clang for the build. Unfortunately, the code also needs Apple’s own libclosure, which provides some interfaces to work with blocks and is the basis for the GCD interface. It actually got some attention when Snow Leopard was presented, because Apple released it for Windows as well, with the sources under the MIT license (very liberal). Unfortunately you cannot find it on the page I linked above; you have to look at the 10.6.2 page for whatever reason.
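For the curious, the check boils down to something like the following configure.ac fragment (my own naming, written from memory rather than copied from the actual repository): try compiling a block literal with -fblocks and see whether the compiler accepts it.

dnl Check whether the C compiler (hopefully clang) accepts the blocks
dnl extension; this is only a compile test, the Blocks runtime is a
dnl separate concern.
AC_DEFUN([CHECK_CC_BLOCKS], [
  AC_MSG_CHECKING([whether $CC supports blocks])
  old_CFLAGS="$CFLAGS"
  CFLAGS="$CFLAGS -fblocks"
  AC_COMPILE_IFELSE(
    [AC_LANG_PROGRAM([], [[int (^addone)(int) = ^(int x) { return x + 1; }; return addone(41);]])],
    [cc_blocks=yes], [cc_blocks=no])
  CFLAGS="$old_CFLAGS"
  AC_MSG_RESULT([$cc_blocks])
  AS_IF([test "x$cc_blocks" = "xno"],
    [AC_MSG_ERROR([a compiler with blocks support (such as clang) is required])])
])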

I first attempted to merge libclosure straight into the diskdev_cmds sources, but then I figured it makes more sense to try porting it on its own and make it available; maybe somebody will find a good use for it. Unfortunately the task is not as trivial as it looks. The package needs two very simple functions for “atomic compare and swap”, which OS X provides as part of its base library, and so does Windows. On Linux, equivalent functions are provided by HP’s libatomic_ops (you probably have it around because of PulseAudio).

Unfortunately, libatomic_ops does not build, as it is, with clang/LLVM; there is a mistake in the code, or in the way it’s parsed — not unexpected, given that inline assembler is very compiler-dependent. In this case it’s a size problem: it uses a constraint for integer types (32-bit) but a temporary (and same-sized input) of type unsigned char (8-bit). The second stumbling block is again libatomic_ops’s fault: while it provides an equivalent interface to do atomic compare and swap on long types, it doesn’t do so for int types; that means it works fine on x86 (and other 32-bit architectures where both types are 32-bit) but it won’t do for x86-64 and other 64-bit architectures. Guess what the libclosure code needs?

Now, of course it would be possible to lift the atomic operations out of the xnu code, or just write them from scratch — libatomic_ops already provides them all, just not correctly sized for x86-64 — but the problem remains that you then have to add a number of functions for each architecture rather than having a generic interface; xnu provides functions only for x86/x86-64 and PPC (since that’s what Apple uses/used).

And where has this left me now? Well, not very far; mostly with a sour feeling about libatomic_ops’s inability to provide a common, decent interface (for those who wonder, they do provide char-sized inlines for compare and swap on most architectures, and even the int-sized alternatives I was longing for… but only for IA-64; you wouldn’t believe that until you remembered that the whole library is maintained by HP).

If I could take the time off without risking trouble, I would most likely try to get better HFS+ support into Linux, if only to make it easier and less troublesome for OS X users to migrate to Linux at one point or another. The specs are almost all out there, the code as well. Unfortunately I’m no expert in filesystems and I lack the time to invest in the matter.

Why do FLOSS advocates like Adobe so much?

I’m not sure how this happened, but more and more often I see FLOSS advocates supporting Adobe, and in particular Flash, in almost any context out there, mostly because they now look a lot like an underdog, with Microsoft and Apple picking on them. Rather than liking the idea of pushing Flash, a proprietary software product, out of the market, they seem to cheer any time Adobe gains a little more advantage over the competition, and cry foul when someone else tries to ditch them:

  • Microsoft released Silverlight, which is evil – probably because it’s produced by Microsoft, or alternatively because it uses .NET, which is produced by Microsoft – and we have a Free as in Speech implementation of it in Novell’s Moonlight; but FLOSS advocates pile on that too: it’s still evil, because there are patents on .NET and C#; please note that the only FLOSS implementation of Flash I know of is Gnash, which is not exactly up to speed with the kind of Flash applets you find in the wild;
  • Apple’s iPhone and iPad (or rather, all the Apple devices based on iPhone OS/iOS) don’t support Flash, and Apple pushes content publishers to move to “modern alternatives” starting from the <video> tag; rather than, for once, agreeing with Apple and supporting that idea, FLOSS advocates decided to start name-calling them for lacking support for a ubiquitous technology such as Flash — the fact that Apple’s <video> tag suggestions were tied to the use of H.264 shouldn’t have made any difference at all, since Flash does not support Theora either, so, excluding the recently released WebM support in the latest 10.1 version of the Flash Player, there wouldn’t be any support for “Free formats” either way;
  • Adobe stirs up a lot of news by declaring support for Android; Google announces Android 2.2 Froyo, supporting Flash; rather than declaring Google an enemy of Free Software for helping Adobe spread their invasive and proprietary technology, FLOSS advocates start issuing “take that” comments toward iPhone users, since their own phones can now see Flash content;
  • Mozilla refuses to provide any way at all to view H.264 files directly in their browser, leaving users unable to watch Youtube without Flash unless they do a ton of hacky tricks to convert the content into Ogg/Theora files; FLOSS advocates keep on supporting them because they haven’t compromised;

What is up here? Why should people consider Adobe a good friend of Free Software at all? Maybe because they control formats that are usually considered “free enough”: PostScript, TIFF (yes they do), PDF… or because some of the basic free fonts that TeX implementations and the original X11 used came from them. But none of this really sounds relevant to me: they don’t provide a Free Software PDF implementation; rather, they have their own PDF reader, while the Free implementations often have to chase after the format, with mixed results, to keep opening new PDF files. As much as Mike explains the complexity of it all, the Linux Flash Player is far from a nice piece of software, and their recent abandonment of the x86-64 version of the player makes it even more sour.

I’m afraid that the only explanation I can give for this phenomenon is that most “FLOSS advocates” line themselves up straight with, and only with, the Free Software Foundation. And the FSF seems to have a very personal war against Microsoft and Apple, probably because the two of them actually show that in many areas Free Software is still lagging behind (and if you don’t agree with this statement, please have a reality check and come back again — and this is not to say that Free Software is not good in many areas, or that it cannot improve to become the best), which goes against their “faith”. Adobe, on the other hand, while not really helping Free Software out (sorry, but Flash Player and Adobe Reader are not enough to say that they “support” Linux; and don’t try to sell me the idea that they are not porting Creative Suite to Linux just so people would use better Free alternatives), somehow gets a pass.

Why do I feel like taking a shot at the FSF here? Well, I have already repeated multiple times that I love the PDFreaders.org site from the FSFE; as far as I can see, the FSF only seems to link to it on one lost and forgotten page, just below a note about CoreBoot… which doesn’t make it at all prominent. Also, I couldn’t find any open letter blaming PDF for being a patent-risky format, a warning which is instead present on the PDFreaders site:

While Adobe Systems grants a royalty-free use of any patents to the PDF format, in any application that adheres to the PDF specifications, other companies do hold patents that may limit the openness of the standard if enforced.

As you can see, the first part of the sentence admits that there are patents over the PDF format, but royalty-free use is granted… by Adobe at least; nothing is said about other parties that might hold them.

At any rate, I feel like there is a huge double-standard issue here: anything that comes out of Microsoft or Apple, even with Free Software licenses or patent pledges, is evil; but proprietary software and technologies from Adobe are fine. It’s silly, don’t you think?

And for those who still would like to complain about websites requiring Silverlight to watch content, I’d like to propose a different solution to ask for: don’t ask them to provide the content with Flash, but rather over a standard protocol for which we have a number of Free Software implementations, and which is supported on the mainstream operating systems for both desktops and mobile phones: RTSP is such a protocol.

Apple and Linux do it differently

I’m not sure why it is that so many people take shots at Apple on one hand whenever they have the chance, but at the same time are totally ready to “copy” their ideas – whether or not they make sense.

You probably remember when I criticised the FatELF idea, stating in black and white that the idea only barely made sense for Apple, as they already had the knowledge and design for it in their possession, while the investment required from Linux developers would just be unjustified compared to the positive effects to be gained from such an approach.

On a similar note, I never understood why people expected “dock-like” software to work on Linux just as well as it does on OS X. It probably has to do with the fact that they have seen OS X in screenshots, in videos, maybe at a local store, but never used it enough to notice the subtle differences in software behaviour between the two.

As far as I can see, the new Ubuntu Lucid screenshots being shown around the net prominently feature a “dock-like” bar at the bottom; I have no idea whether that’s the default Lucid interface yet – I don’t have enough time to update my virtual machines, especially not at this moment – but I fear that if it is, they haven’t rewritten most of the software available for X11 to behave like the software for OS X does — well, most of it at least.

A friend of mine, an average-time Apple user and quite the geeky kind of person, actually went as far as asking me since when Linux had learnt application handling from OS X. As I said, I doubt it has, yet. Whether this is good or bad is largely a matter of taste. Without entering a dispute that could be too long to be worth the effort, I’d like to explain what he and I are referring to in this instance. The Dock in OS X is not just a launcher for applications; it also replaces what Windows and most of the Unix GUIs I know of call the “taskbar”, showing which applications and windows are open… and here you might find the first clue that something’s off.

On Windows, as well as on X11, what you keep track of is which windows are open. The same software (application) can have multiple processes, each with zero or more windows, but what you generally track in the taskbar is windows. You usually get around this by using tray icons, or notification area icons: software like Pidgin creates an icon in the tray, and you can use that to show or hide the contact list window, even when no other windows are open for that application/process.

A number of application domains have, over the years, started heavy-handedly using the notification area, or system tray, to feign a “transparent” application running in the background: instant messengers, audio players and… OpenOffice. Both on Windows (for a long time) and on Linux (just recently, as far as I can tell), OpenOffice has a “quicklaunch applet” that is basically just a system tray icon keeping an OpenOffice process running while you have no OpenOffice document open. I’m quite sure there used to be a similar “applet” for Firefox, but I don’t think it ever reached Linux, and I’m not sure whether it still exists for Windows. Recounting all the possible causes and cases for this to have happened is definitely out of my reach, but I can make more than a few (in my opinion) acceptable hypotheses.

The first hypothesis starts by looking at Windows, in particular the “huge” step between Windows 3.1 and Windows 95, one I have lived through myself (yeah, I have used Windows 3.1, admittedly just to play games and little more – I was too young to do anything useful with it – but I recall it as well as I recall Windows 2.1!). Applications’ windows in the older 3.x series of the interface (it wasn’t really an operating system at the time), once minimised, went straight to the desktop – the “desktop-as-a-folder” idea that we grew so used to, to the point that KDE got quite a lot of flak for touching it with Plasma, is something that, unless I recall incorrectly, came from the Macintosh, and only reached “mainstream” Windows systems with 95 and later – and even at 640×480 the desktop was big enough to keep icons for many windows visible (it is, on the other hand, questionable whether the computers could run as many applications, but that’s a different problem; probably they could if they were all Notepads). On Windows 95, instead, the desktop became a folder, even if a slightly strange one, and the taskbar was introduced.

The taskbar was small, and you could only open a handful of windows before their titles became unreadable; at that point, things like WinAMP and other background-running software didn’t look like they needed a taskbar entry at all; the system tray was also present, so they decided to have this “minimise to tray” (or “ignore taskbar”) option. Given that Windows lacked the idea of daemons, and that the non-NT kernels even lacked the idea of services, all the background, or long-lasting, processes added themselves to the tray icons. The idea became so universal that when I was toying with Borland C++ Builder (with the free-as-in-beer licenses they distributed with magazines here in Italy), there were tons of components that let you add a single extra button to the title bar: “Minimise to tray”.

The same idea then came down to Linux (and Unices in general) – because of the same taskbar/tray icon setup present in KDE and GNOME – even though there was an attempt to solve the problem by adding the group-by-application feature to the taskbar. The same feature reached Windows with XP, while the NT-kernel series – which went mainstream with that same version, even more than with the previous Windows 2000 – already had services, the equivalent of Unix daemons.

Let’s look back at the other “notification area icons” (yes, I’m switching terminology back and forth between Windows and Linux, for one – in my opinion – very good reason: they are the same concept, the same thing; FreeDesktop tried forcing them to be different things, but they are used in the same way, and are thus the same; Ubuntu is now trying to change that once again, and I’m not sure how that will turn out): the quicklaunch applets.

Why do OpenOffice and Firefox (and Microsoft Office itself had the same thing a long time ago, I think… maybe Office 97? I lost sight of that stuff over the years) need such hacks? Well, the answer is functionally simple: to take less time to load a document. Reduce the start-up time of the application and the user will be happy, but how do you do that? Make sure the application is already running, invisible to the user… in the system tray. This gets especially important on Windows because process creation time has historically been worse there than it is (was?) on Unix and Linux.

I’m not expert enough to know why it is that, on Windows and on X11, applications close their process (or main process, in the case of things like Chrome) when all their windows are closed; it’s probably related to the fact that you might not want processes running that you cannot see at all (although both environments are full of such things nowadays!), but so it is: minus the quicklaunch applets named above, once you close the last OpenOffice document window, or the last Firefox window, their process is gone, and the next start-up will require you to start the application from nothing (which is, obviously, pretty slow even on Unix).

This is not the case on OS X, as there you have a running application, which may or may not have windows open… when you close all the windows, the application keeps running, visible only as a glow (or an arrow, on pre-Snow Leopard versions) under its icon in the Dock. Keep the application’s processes running and you don’t have to load them from scratch even after all their windows have been closed. Actually, I think this would be very nice to have on Linux, when used as a desktop.

I have heard of lots of software that tries to simulate the same behaviour by splitting into backend and frontend applications (and thus processes), using D-Bus or other methods to communicate with each other. How well this is working out, I cannot judge. I can certainly see why this is an interesting approach, but until all applications behave like this (if ever!), I don’t think “dock-like” software is a good fit for Linux.

This is no different from carrying over to Linux the development approach of applications designed for OS X, as I found done with DropBox.

Can we be not ready for a technology?

İsmail is definitely finding me topics to write about lately… this time it was in relation to a tweet of mine ranting about Wave’s futility; I think I should elaborate a bit on the topic.

Regarding the Wave rant, which adds to the first impressions I posted a few weeks ago, I think things are starting to go downhill. On one side, more and more people now have Google Wave, so you can find people to talk with; on the other, of the Waves I received, only one was actually interesting (but still nothing that makes me feel like Wave is useful), and the rest fall into two categories: on one hand the ping tests, which I admit I also caused – because obviously the first thing you do in something like Wave is ping somebody you feel comfortable talking with – and on the other hand three different waves of people… discussing Wave itself.

And you know that there is a problem when the medium is mostly used to discuss itself.

And here is where İsmail and I diverge: for him the problem is that “we’re not ready” for the Wave technology; I think that the phrase “we’re not ready” can only come out of a sci-fi book, and that there is something wrong with a technology if people can’t seem to find a reason to use it at all. But I agree with him when he says that some technologies, like Twitter, would have looked definitely silly and out of place a few years ago. I agree because we have a perfect example that is not hypothetical at all.

You obviously all know Apple’s Dashboard, from which even the idea of Plasma for KDE seems to have come, and from which Microsoft seemingly borrowed heavily for the Vista and Win7 desktops. Do you think Apple was the first to think of that stuff? Think again.

It was 1997, and Microsoft released Internet Explorer 4, showing off the Active Desktop… probably one of the biggest failures in their long-running career. The underlying idea is not far at all from that of Apple’s “revolutionary” Dashboard: pieces of web pages to put on your desktop. At the same time, Microsoft released one of their first free development kits: Visual Basic 5 Control Creation Edition (VB5CCE), which allowed you to learn their VB language; and while you couldn’t compile applications to redistribute, you could compile ActiveX controls, which could in turn be used by the Active Desktop page fragments. Yes, I did use VB5CCE; it was what let me make the jump from good old QBasic to Windows “programming”.

So, if the whole concept of Dashboard (and Plasma, and so on) makes people so happy now, why did it fail at the time? Well, to use İsmail’s words, “we weren’t ready for it”; or to use mine, the infrastructure wasn’t ready. At the time, lots of users were still not connected to any network, especially outside of the US; staying connected cost a lot, and bandwidth was limited, as were the computers’ resources. Those of us (me included) who had no Internet connection at all felt deprived of resources for something totally useless to us; those who had dial-up Internet connections would feel their bandwidth being eaten up by something they probably didn’t care much about.

Who was at fault here? Us for not wanting such nonsense running on our already underpowered computers, or Microsoft for wanting to push out a technology without proper infrastructure support? Given the way Apple was acclaimed when they brought Dashboard to their computers, I’d say the latter, and they actually paid the price of pushing something out a few years too early. Timing might not be everything, but it’s definitely something.