Packaging EPSON’s scanners’ drivers

You might remember that a couple of years ago I bought an EPSON scanner – a GT-S50, no longer sold, replaced by the GT-S55 – to scan invoices and other kinds of documents that I had lying around. The investment paid off, in my eyes, because now I have all my documents digitized, always available, and I was able to fill quite a few bags with shredded paper. Digitizing that many documents without a professional scanner would have cost me far more in time than the device cost me in money.

But as I noted in the blog post above, to get the scanner to work on Gentoo Linux I had to create ebuilds for a new package with the proprietary plugin that can handle the scanner — it’s not a SANE plugin per se, it’s rather a plugin for the EPSON-provided epkowa backend, which is otherwise open source. I actually ended up adding two packages to the tree to be able to do that.

Nowadays, the media-gfx category hosts a number of plugins for epkowa, under the iscan-plugin and esci-interpreter prefixes — the latter is the case for the GT-S50; as far as I can tell, the main difference between the two is that the latter ships with no firmware, just a protocol interpreter, as the name suggests. Together with this, I maintain another ebuild for the plugin (32-bit only) and firmware (32- and 64-bit) of my older Perfection 2580 scanner, plus two more packages that I proxy for users who can actually test on the actual hardware. Today I even added another package for a scanner I don’t own, but plan to buy when I settle down in my new residence (the 2580 stays here with my mother): the Perfection V370, which is the first one in our tree that actually uses “perfection” in the RPM name.

The ebuilds are almost identical to one another, but there are quite a few tricky bits around them. First of all, we don’t install to exactly the same locations as the RPMs: they use /usr as the base path for everything, but in our case, since these are provided by an external source and are pre-built, they are installed in /opt instead — with the exception of the firmware, which stays in /usr/share simply because that’s where some of the software looks for it (namely, the snapscan backend for SANE).
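As a rough sketch of what that means in ebuild terms (the function below is hypothetical — file names and subdirectories are illustrative, not copied from the actual tree):

```shell
# Hypothetical src_install() for one of these plugin ebuilds.
src_install() {
	# pre-built proprietary plugin goes under /opt, not /usr
	exeinto /opt/iscan/lib
	doexe libesci-interpreter-example.so

	# firmware stays in /usr/share because that is where the SANE
	# backends (snapscan in particular) go looking for it
	insinto /usr/share/iscan
	doins example-firmware.bin
}
```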

Some of the plugins are valid for multiple USB IDs, as is the case for the GT-S50/55/80/85 interpreter, while others are only valid for a particular ID – which may still refer to multiple models, as sometimes it identifies the main controller on the device, which may be shared. In the case of the 2580 noted above, it shares the flatbed with the 2480; the difference between the two models is all in the lid: in both it hosts the lamp for scanning film, but in the 2580 it also includes the motor to load the film semi-automatically.

You can easily see which IDs a plugin applies to by running strings on the RPM and looking for iscan-registry — but at this point the question should be: where do you get the RPM? Well, unfortunately AVASYS no longer provides the download at a known location; EPSON themselves are distributing it, through Akamai — which means that we can’t point to the upstream servers anymore, and euscan won’t be able to find a new version when it’s released. The solution we opted for is to mirror the RPM files on my webspace on dev.gentoo.org, including the needed documentation. It’s not very flexible, of course, but at least it works and it does not require us to fetch-restrict anything.
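To make the strings trick concrete, here is a self-contained demo with a made-up blob standing in for the plugin library — the iscan-registry arguments are from memory, and while 0x04b8 really is EPSON’s USB vendor ID, the product ID here is invented:

```shell
# Embed a registration command in a fake binary blob, then recover it
# the same way you would from the real .so shipped in the RPM.
printf 'binary\0junk\0iscan-registry --add interpreter usb 0x04b8 0x0999\0' \
	> /tmp/fake-plugin.so
strings /tmp/fake-plugin.so | grep iscan-registry
```

On the real plugin you would run strings against the shared object extracted from the RPM (rpm2cpio piped into cpio does the job).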

It’s hard for me to tell how many more plugins for iscan are out there, but if your scanner is among them, feel free to send an ebuild for it to our Bugzilla and CC me — you can base it off media-gfx/iscan-plugin-perfection-v370, which is the latest I committed.

Why can’t I get easy hardware

When I bought my Latitude I complained that it seemed to me more and more like a mistake — until the kernel started shipping with the correct (and fixed) drivers, and the things that originally didn’t work right (the SD card reader, the shutdown process, the touchpad, …) started working quite nicely. As of September 2011 (a year and a quarter after I bought it), between Linux and firmware updates from Dell and Broadcom, the laptop worked almost completely — the only part still missing is the fingerprint reader, which I really don’t care that much about.

Recently, you probably have seen my UEFI post where I complained that I couldn’t install Sabayon on the new Zenbook (which is where I’m writing from, right now, on Gentoo). Well, that wasn’t the only problem I’ve had with this laptop, and I should really start reporting issues to the kernel itself, but in the meantime let me write down some notes here.

First off, the keyboard backlight is nice and all, but I don’t need it – I learnt to touch-type when I was eight – so it would just be a waste of battery. While the keys are reported correctly, and upower supports setting the backlight, at least the stable version of KDE doesn’t seem to support changing it. I should ask my KDE friends if they can point me in the right direction. Another interesting point is that while the backlight is turned on at boot, it’s off after suspension — which is probably a bug in the kernel, but one that happens to work in my favour.

Speaking of things not turning back on after suspension, the WLAN LED on the keyboard does not turn back on at resume. And related to that, the rfkill key doesn’t seem to work that well either. It’s not a big deal but it’s a bit bothersome, especially since I would like to turn off only the Bluetooth adapter (and since that’s supposedly hardware-controlled, it should get me some more battery life).

The monitor’s backlight is even more troublesome. The first problem is deciding who should be handling it — it’s either the ACPI video driver (by default), the ASUS WMI driver, or the Intel driver — and of the three, the only one that makes it work is the Intel driver; I’m not even sure whether that’s actually controlling the backlight or just the tint on the screen, although, when set to zero, it turns the screen off rather than just displaying it as black. It does make it bearable, though.

The brightness keys on the keyboard don’t work, by the way, nor does the one that should turn the light sensor on and off — the latter isn’t even recognized as a key by the asus-wmi driver, and I can’t be sure of the correct device ID that I should use to toggle said light sensor. After I hacked the driver not to expose either the ACPI or the WMI brightness interface, I’m able to set the brightness from KDE at least — but it does not seem to stick: if I turn it down, after some time it climbs back up to the maximum (when the power is connected, at least).

And finally, there is the matter of the SD card reader. Yesterday I went to use it, and found out that … it didn’t work. Even though it’s a USB device, it’s not mass-storage — it’s a Realtek USB MMC device, which does not use the standard USB interface for MMC readers at all! After some googling around, I found that Realtek actually released a driver for it, and after some more digging I found out that said driver is currently (3.7) in the staging tree as a virtual SCSI driver (with its own MMC stack) — together with a PCI-E sibling, which has already been rewritten for the next release (3.8) as three split drivers (an MFD base, an MMC driver, and a MemoryStick driver). I looked into porting the USB one as well, but it seems to be a lot of work, and Realtek (or rather, Realsil) already seems to be working on porting it to mainline proper, so it might be worth waiting.

To be fair, what made me drop the idea of working on the SD card driver is that, to have an idea of what’s going on, I have to run 3.8 — and as of rc1 it panics as soon as I re-connect the power cable. So even though I would like to find enough time to work on some kernel code, this is unlikely to happen now. I guess I’ll spend the next three days working on Gentoo bugs, then I have a customer to take care of, so this is just going to drop off my list quite quickly.

Back to Cinnamon!

Okay, after my negative experience, I’m now back to trying Cinnamon, and I have a quite different story to tell.

First of all, thanks to Julian, I found that the issue I was having with the keyboard only involved my user’s settings, and not the code itself. After a bit more fiddling around, I found a more detailed case where this happens.

My keyboard layout of choice is the so-called US Alternate International (us alt-intl); unfortunately, even with the recent improvements in input hotplug for Xorg, it seems like configuring XKB in the configuration file, or in the configuration directories, is not recognized by the evdev driver (at least), which is why I configured GNOME, Xfce, and, by reflection, Cinnamon, to change the layout of the keyboard — and that’s where the problem lies. When GNOME3 or Cinnamon starts and sets the keyboard layout (both do it the same way, as in both cases you configure it through GSettings), the Alt_L and Meta keys get somehow swapped… but not in all cases, as Emacs was still getting it right (which meant that just switching the two was not going to help me).

I guess I should track it down even further than this, but for the moment I solved it by calling setxkbmap in my ~/.xsession file, and that doesn’t seem to cause any trouble.
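For reference, the relevant part of the ~/.xsession is nothing more than this (a minimal sketch — the session command is illustrative, use whatever you normally exec, and your display manager has to be set up to honour ~/.xsession in the first place):

```shell
# ~/.xsession: force the layout before the desktop session starts,
# bypassing the GSettings-driven layout switching entirely
setxkbmap -layout us -variant alt-intl

exec cinnamon-session
```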

The other issue I reported was that Clutter is unstable when using nvidia hardware; to be precise, the issue is with nvidia-drivers (surprised?), and it’s actually the second issue I’ve found with their drivers in the last few weeks: the other was Calibre being able to make Xorg eat so much memory that the system became unresponsive, just by being launched — and add to that Skype, which was unable to render *at all*…

So I decided to give a try to what Pesa suggested to me to solve the Skype (Qt) issue: the nouveau driver. Actually I wanted to try it a couple of weeks ago, but after reading through their website I wasn’t sure whether my video card (NVIDIA GT218) was supported, or whether I had to deal with dumping the firmware out of the nvidia drivers and whatever else… but the other day, after screwing my own system over and needing to boot from SysRescue, I found out that the driver it loads is … nouveau, and it worked decently well.

So since it’s the weekend, and the time was right, I decided to give it a try — and the results are great! Clutter works fine, Cinnamon works fine, and while I haven’t tried anything that runs in OpenGL proper yet (no games), and Google’s Maps GL reports not working with this implementation (not that I care much), it works definitely well enough for what I usually do. I haven’t tried audio out on the DisplayPort connection, but it’s not like I’ve ever tried it before… Suspend works just fine.

And yes, for now my experience with Cinnamon is terrific!

On stabling hardware-dependent packages

I thought I wrote about this before, but it looks like I didn’t. And since this is something I was asked about a few months ago as well, any time is a good time to fix that by writing down what I think about it.

We could say that almost every package in the tree relies on someone having at least some piece of hardware: a computer. At the same time, some packages require more particular hardware than others: drivers, firmware, and similar packages. The problem with these packages is that, unless you own the hardware, you have no way to know that they work at all. Sure, you can be certain that they do not work if they fail to build, but the opposite can’t be confirmed without the hardware.

This is troublesome to say the least, as sometimes a given hardware driver’s ebuild is written, by either a developer or a user (who goes on to proxy-maintain it), while they have the hardware … but it goes untouched once said hardware is gone for good. This has happened to me before: for instance, I no longer have a Motorola phone to test those handlers; nor do I have an HP printer any longer, so there goes my help with the hplip package…

At the same time, it is true that I still have quite a few hardware-dependent packages in the tree: ekeyd, iwl6000-ucode, and so on and so forth. One of the packages I still try to help out with, even though I had to discard the related hardware (for now at least), is freeipmi, which exemplifies all too well the problems with hardware-dependent packages.

FreeIPMI, as the name implies, is an implementation of utilities and daemons to manage IPMI hardware, which is used on mid-range-to-high-end server motherboards for remote management. I had one on Yamato until recently, when either the mainboard or the PSU started to act up and I had to take it out. I had tried to make the best out of the ebuild while I had access to an IPMI board, and that consisted mostly of bumping it, fixing the build and, recently, updating its init scripts so that they actually work (in the case of the watchdog service I ended up patching it upstream, which meant having something that actually works as intended!).

Last year, about two years after the release of the version that was marked stable, I decided to ask for a new stable with a version of the package that surely worked better … which is not the same as saying that it worked perfectly. Indeed, now I know that said version simply did not work in some configurations at all, because the bmc-watchdog init script, as I said, did not work and required upstream changes.

What happens when I ask for a new stable this year, to cover for that issue? Well, this time around the arch teams have finally decided to test the stuff they mark stable, but that also meant nobody was available to test IPMI-related packages on x86. Even our own infra team was unable to test it anywhere besides amd64, with the end result that, after months of back-and-forth, I was able to get the old package de-keyworded, which means that there is no longer a “stable” freeipmi package; you’ve got to use ~arch.
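In practice, for a user on a de-keyworded arch this means accepting the testing keyword explicitly, with something along these lines (a sketch — the category is from memory, and you’d adjust the keyword to your own arch):

```shell
# /etc/portage/package.accept_keywords/freeipmi
sys-libs/freeipmi ~amd64
```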

Don’t get me wrong: I’m not happy about the conclusion, but it’s better than pretending that an ancient version works.

So when Vincent – who proxy-maintains the acr38u IFD driver for pcsc-lite – asks me about marking his ebuild stable, I plan on writing this very post… and then leave it there for over five months… I apologize, Vincent, and I don’t think I have enough words to express my trouble with a delay of such proportions.

And when I get Paweł’s automated stable request for another pcsc-lite driver, my answer is an obvious one: does anybody have the hardware to test the package? I’m afraid the answer was obviously no… unless Alon still has said reader. The end result is the same though: time to de-keyword the package so that we can drop the ancient ebuild, which is definitely not adhering to modern standards (well, okay, -r2 isn’t much better, but I digress).

Of course the question here would be “how do you mark any pcsc-lite related software stable at all this way?” … and I don’t really have a clear answer to that. I guess we’re lucky that the ccid driver covers so many hardware devices that it’s much more likely that somebody has a compatible reader and some card to test it with… the card part is easy, as I suppose most people living in the so-called Western world have an ATM or credit card… and those have a chip that can be at least detected, if not accessed, by PC/SC.

There is actually a script written in Python that allows you to access at least some of the details on EMV-based cards… the dependencies should all be in Portage, but I didn’t have time to play with the code long enough to make sure it works and is safe to use.
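Even without that script, confirming that PC/SC at least sees a chip takes a single command — assuming app-crypt/pcsc-tools is installed (package name from memory) and pcscd is running:

```shell
# waits for a card and prints the reader name plus the card's ATR;
# seeing an ATR is enough to call the reader "tested" for detection
pcsc_scan
```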

There is another obvious question of course: “why don’t you stable your own stuff?” — while that would be sensible, there is a catch: my “desktop” systems – Yamato, Titan (this box) and Saladin (the laptop) – have all been running ~arch for as long as I can remember… or at least mostly; I gave up on glibc-2.14, so it is masked on all of my systems, on account of breaking Ruby — and I’m still pissed by the fact that it was unmasked even though that breakage was known.

Any comments to help pick a direction on this kind of trouble?

I so much hate brand computers

You probably know already that I actually get a good deal of my funds as a computer technician, cleaning up and supporting Windows computers for homes and small offices. Sometimes this also has the good side effect of introducing people to the (for them) new world of Free Software. On the other hand, the money comes in handy, especially given that tinderboxing costs money and I’m definitely not paid for most of the work I do there.

Besides supporting boxes that are already there, from time to time I also build new boxes for friends who ask for my help choosing a new computer, cheap and powerful enough for their needs (and I have to say that, between AMD and Intel, prices on desktops have really dropped lately).

My personal bane, though, is brand computers: HP, Acer, Toshiba, you name it, I most likely detest it. I have already ranted about the stupid way HP allows you to create the so-called recovery media; but that is nothing compared to the stupidity I encountered with a 2005 Toshiba laptop of a family friend.

Incidentally, my Dell Latitude came with a standard Windows 7 install DVD, and a separate one with software and drivers… thanks Dell!

First of all, the (valid) XP Home license it’s provided with only works with the Toshiba media, as it’s a System-Locked Preinstallation — basically one huge lock-in that forces you to use the original CD (or DVD) the computer comes with to restore it, even though you could easily install from a standard (vanilla) CD or a slipstreamed one. Unfortunately, the Toshiba media in this case wasn’t even a proper set of install CDs to be applied in a given order or anything like that; rather, it is a single, huge DVD with a Norton Ghost image of the already-installed system. Oh well.

The problem is that the “preinstall” does not come simply with Windows and a few Toshiba-branded components installed, but rather with a truckload of content of all sorts: Norton Internet Security, Adobe Reader 7, Macromedia Flash (yes, Macromedia — and it’s not even uninstallable, because the MSI that you’d have to use to run the uninstall is not present on the system)… just what you need to get a headache you won’t forget for a month or so. About an hour after completing the imaging of the “preinstalled” system, I was able to get it into a clean enough state that I could actually start installing the real stuff that is needed.

I’ve got to say, this actually makes me think something good of Microsoft: as far as I know, they stopped using multiple media with the 6.x series (which, I remind you, includes Windows 7), and instead use the product code itself to decide what to install on the system.

What actually depresses me quite a bit, though, is that one of my most recent customers told me they only buy brand-name PCs because that way they “risk less incompatibilities between hardware”… consider that I have had an HP computer that refused to work at all with its recovery media yet worked fine when installed manually, and a couple of Dell boxes that, while having the same model number, required half-different installs — one with a Radeon card that doesn’t work with the official ATI drivers and requires the Dell-provided ones instead, and that requires the monitor to be installed with its own drivers, otherwise it fails to reach the correct resolution…

Well, in the past three years I’ve built around ten computers that went on to run Windows all the time; besides some idiotic hardware manufacturers with captcha-counter-captcha and interminable license agreements, I really didn’t have much trouble; okay, so once or twice things didn’t work smoothly until I updated the BIOS, but that’s not something brand-name computers are exempt from either. For instance – and I hope it was just a mistake in lshw – on an Acer laptop I had to replace the hard drive of, only after updating the BIOS did lshw show me the data regarding the Level 2 cache; before, it was only showing the Level 1 cache — both times I was booting from the same SysRescueCD USB key.

And then people tell me that Linux has bad hardware support… I’d say not; sure, it’s picky… but what it’s picky about is mostly the stuff that brand-name computers try to shove on you!

On Windows and hardware support

I have blogged before about my part-time job as an external support technician for small businesses and people with next to no computer skills, helping them deal with their systems. Actually, this encompasses almost all of my posts related to Windows since, while I have used it for development and testing a few times, I definitely don’t enjoy using it, or even keeping it around at all.

While, as I said, I don’t enjoy using Windows at all, it is true that I have my own licenses for it; yes, plural. One I bought for various kinds of work: a pre-ordered Windows 7 Ultimate license (which I paid less than half of the running price in Italy, thanks Amazon); the other was given to me with the infamous Dell laptop — getting Ubuntu instead of Windows on that laptop wasn’t a choice on the website, and the call centre would have given me a sloppier laptop for a 20% higher price, so the “cost” of the license was effectively negative; not nice, but a fact of life.

The Ultimate license is tied to an installation within a virtual machine (alas, also paid for), while the other is on the laptop itself, mostly because I need(ed) it to update the system’s BIOS and the smartcard reader’s firmware. From time to time I find enough energy to half-waste my time with it and check whether Dell has released some new update — I still hope for something that can make the fingerprint reader speak a standard-ish protocol that some Linux software can make use of. Last time I did that, a number of drivers had been updated and I wanted to get them all at once; Dell lets you use some software of theirs to do so, but it only works with Internet Explorer or Firefox, not Chrome (which is my browser of choice nowadays). Oh well.

As usual for BTO computers, the Dell page provides a few options for drivers that aren’t really present on the unit itself; contrary to my experience with other vendors (more on that in a moment), Dell only listed a few, and all of them were a “not there” option rather than a “one out of many” option. All in all, it wasn’t as painful as other rebuilds I’ve run.

On the other hand, yesterday I picked up an HP laptop, not for a job but for a friend of mine — or rather, for my friend’s mother. It’s a pretty recent laptop, as powerful as my own Dell (same CPU, much bigger hard disk, ATI graphics card — the only thing I envy of that laptop), and it came with Windows 7 Home Premium 64-bit preinstalled. Unfortunately, since it was bought, a number of problems have come up.

The one reason why I was asked to look at it was that they were unable to install Windows Live Messenger on the system. I wouldn’t have minded suggesting Pidgin instead – I did convert a number of friends to that, and thanks to GTalk & Facebook a few have totally abandoned the Microsoft messenger network – but the problem turned out to be more complex: Windows Update just didn’t want to work. At first glance the system seemed to be infected by some virus of sorts.

Interestingly enough, they didn’t call me first; I was actually the third choice. The HP tech support was called first, and they insisted that a recovery (a nicer word for “format and reinstall”) was needed; a local shop tech was called as well; he insisted on installing a number of other programs on the box, including Nero and the Kaspersky antivirus, but didn’t solve the problem, suggesting instead a downgrade to XP (which is silly nowadays: as much as you might find yourself more comfortable with it, running XP in 2010 on a 64-bit capable multicore machine is asking for trouble).

Anyway, I agreed with the HP techs that a recovery was needed, and took the laptop home to back it up before the install — but not before asking whether they or the techs had created the recovery media.

While Dell provided me with old-school DVDs with the operating system and the drivers, HP ships their laptops with a recovery partition on the disk, and some tools to eventually create recovery media. When I first tried to get a Linux-running laptop rather than the MacBook Pro I had been using since 2005 or so, I got a cheap-o Compaq laptop with Vista, so I knew the drill: the recovery media was either a silly number of CDs or two DVDs, and it could be created only once, as afterwards the software stopped providing the option. Of course nothing stopped me at the time from making a copy of the two DVDs in the form of ISOs and burning another pair — I’m still wondering who in which department of HP came up with such a backwards idea.

Funnily enough, the techs didn’t create a recovery media set because “there’s the partition, that’s better”, which is a lame excuse for “couldn’t be arsed” — not only can the partition be infected, since it’s accessible by the standard operating system, but you also cannot rely on it if your hard drive melts and you have it replaced. As you might have guessed already, HP sells the recovery media themselves. Okay, I double-checked that the software still allowed me to proceed, and yep — which means the techs were really being incompetent rather than malicious.

I bring the laptop home, and first things first, I start up a SysRescueCD USB key to check for viruses — I’m not sure, maybe SysRescueCD upstream is reading what I write, or my mind, since they added the iSCSI Enterprise Target package in their latest releases, which allows me to export the partitions whole to scan from the virtual machine (much faster and more reliable than using ntfs-3g and Samba). In a turn of events that I really don’t like, the scan identifies no virus at all, with an up-to-date definitions database. Oh well, it’s backup time by now.

After this is also done, I try to build the recovery media; HP has got to be kidding me, because this time the options are an unknown (to me) number of CDs, or six DVDs. Yes: six 4.7GB disks, over 25GB of recovery data, sheesh! They sure bundle a lot of software in their recovery. I struggle to find that many empty DVDs at home (I haven’t bought any in years, and no, you cannot use a DVD-RW, copy the ISO and blank it again: they disallow DVD-RW media, as well as dual-layer DVDs), and try using those. To my surprise, it refused to write to them at all. Okay, I supposed that whatever messed with the system enough to disable Windows Update could have disabled the DVD writer as well, so I decided to try again after the recovery; I ran the recovery from the partition.

Interesting fact: HP allows you to execute either a full manufacturer recovery or a minimal recovery; the latter should be pretty much vanilla, so I chose that. After the install, hell itself came to me. First of all, “minimal” doesn’t really mean vanilla: HP still installs their recovery manager (of course), all the drivers, and some Windows updates. But that’s bearable.

The bad part started to show itself when Windows Update still wasn’t working; and the same went for the recovery manager, which still failed to write the DVDs. Trying to debug the issue, I noticed two even more upsetting problems: the first was that trying to change the network location classification from Public to Work threw up a UAC window claiming that “an unknown program” tried to change the system configuration (the Control Panel is an unknown program now?); the second was the Event Viewer applet being totally f♥♥ked up: UAC complained that mmc.exe wasn’t signed (what?), and it didn’t show any message of any kind at all.

Cursing HP loudly, plan B was the only choice. I got a Windows 7 Home Premium 64-bit DVD and decided to install a clean, vanilla copy of Windows, and deal with the drivers’ mess later. This has actually worked pretty well up to now. The only inconvenience was that Windows refused to activate itself online with the original HP product key, requiring me to call the automated phone service (which is no longer toll-free when calling from cellphones; but it was still better than typing 54 – fifty-four – digits by hand).

It was definitely funny, though, to note the difference between Dell and HP: the former has the product’s labels visible underneath the laptop, and the Windows license sticker hidden underneath the battery; the latter has it swapped, with the license visible, and all the product’s information requiring the removal of the battery to be read.

Anyway, while the system was installing first and updating afterwards (I don’t think I’d ever expected to rejoice so much at seeing Windows Update working!) I got the model number and went looking for the drivers on HP’s support page. And then I remembered why I hated working with HP laptops so much.

The page for the exact product name shows me drivers for the Intel Turbo Boost technology (not supported by this laptop), both ATI and Intel graphics cards (it only has the ATI), a driver for a Realtek card reader (which doesn’t seem to be present?), two Bluetooth drivers (of which at least one doesn’t work!) and three wireless adapter drivers (Broadcom, Atheros and Intel). Interestingly enough, Windows 7 already detects both the wired and wireless network cards upon install; their drivers, as well as the Synaptics touchpad driver – all of which are present on the product’s driver page (after you have already chosen Windows 7 as the operating system) – are downloaded through Windows Update and need not be installed through the HP packages at all.

At the end of the day I had downloaded 117MB of bad (unused) drivers, and 288MB of unneeded or obsolete drivers (including 200MB worth of ATI drivers, where the original Catalyst is 70MB in size).

I have to say that even though not everything works as it should, my Dell has shown a much friendlier approach, both in terms of user experience with Windows and in ease of configuration, so in this I really can’t blame them. I’m just afraid this won’t be the last HP laptop I’ll have to work on.

Gource visualising feng’s history — A story of Radeon

Or see this on YouTube — and yes, it is quite ironic that we’ve uploaded the visualised history of a streaming software stack to YouTube.

The video you can see here is the git history of feng – the RTSP streaming server Luca and I are working on for the LScube project, previously founded by the Turin Polytechnic – visualised through Gource.

While this video has some insights about feng itself, which I’ll discuss on the project’s mailing list soon enough, I’m using it to bring home another, I think even more important, point. You probably remember my problems with the ATI HD4350 video card … well, one of the reasons why I didn’t post this video before, even though Gource has been in the tree (thanks to Enrico) for a while already, is that it didn’t work too well on my system.

It might not be too obvious, but the way Gource works is by using SDL (and thus OpenGL) to render the visualisation to screen and to (PPM) images – the video is then produced by FFmpeg, which takes the sequence of PPM images and encodes it in H.264 with x264; no, I’m not going to do this with Theora – so you rely on your OpenGL support to produce good results. When 0.24 was being worked on (last January) the r700 Radeon driver, with KMS, had some trouble, and you’d see a full orange or purple frame from time to time, resulting in a not-too-appealing video. Yesterday I bit the bullet, and after dmesg showed me a request from the kernel to update my userland, I rebuilt the Radeon driver from git, and Mesa from the 7.8 branch…

Perfect!

No crashes, no artefacts in glxgears, and no artefacts in Gource either, as you can see from the video above. This is with vanilla kernel 2.6.33, Mesa 7.8 Git and Radeon Git, all with KMS enabled (and the framebuffers work as well!). Kudos to Dave and all the developers working on Radeon: this is what I call good Free Software!

Just do yourself a favour and don’t buy video cards with fans… leaving aside nVidia’s screwup with the drivers, all of them failed on me at some point; passive cards instead seem to last much longer, probably because of the lack of moving parts.

What the heck is up with hardware drivers download?

Today I’m fixing up yet another streamlined Windows XP CD for a friend of mine (it’s an original Windows as usual).

I have already wondered about some issues with Windows drivers, but today things seem to have become even more hellish.

First, VIA stopped providing drivers on VIA Arena and now provides them on their own site; most importantly, the download area does not work with Firefox, so I had to use Internet Explorer to download them. Way to go, VIA!

Second, when I go to the Asus website to download the drivers for the motherboard, I’m given a captcha to complete. To download a frigging driver!

What the heck!

Driver hell — when will it stop?

To get some extra pocket money to spend on the everyday maintenance of my systems, I also ended up doing maintenance of Windows computers on a daily basis; it’s not extraordinarily bad, and it usually doesn’t take me more than a day for a single computer, even if it’s the first time I see it (once I’ve seen it once, I already know what to expect).

Unfortunately, it’s not always feasible to convert people to Linux yet; although I think I might start soon enough, at least with a few people whose only use of a computer is to “browse websites, send email, watch a movie from time to time”. To make the task easier I obviously set up systems with Firefox and Thunderbird, VLC and OpenOffice, so that at least some programs can be found on the “new” systems when they migrate.

Unfortunately, it seems that Windows, especially Windows XP, which a lot of my customers have OEM licenses for, has become a driver hell just like in the old days. And vendors don’t make it much easier. Most vendors providing complete systems tend not to care enough about their users to provide downloads for the drivers (they just tell you to use the recovery partition; guess what? that often doesn’t work very well, if at all, and in one instance it was even mounted as a drive on the normal OS… which meant it was infected too!), and the component manufacturers have websites that calling complex would be an understatement:

  • ATI/AMD website is a mess to navigate; while they do (or did) chipsets too, you cannot really find a “chipset drivers” section, and if you have an older motherboard that is supported by legacy drivers, you’ve got to navigate at least four pages before you find that out!
  • Asus website is a mess of JavaScript; whenever you ask to download something you have to tell them which operating system you’re looking for – even for BIOS updates – the download window is centered on the screen and does not work on cellphones, and of course I could once have used a cellphone just fine if it weren’t for that (given that Asus boards can usually update the BIOS through USB sticks); no matter which operating system you select, half the time the same files are given to you anyway;
  • Intel website is also a labyrinth; to download a driver you have to search for the right class of software, then pick one in particular, and it often proposes two options; then you have to agree to the license and click download again… which does not download the file, but rather redirects you to a page that calls a JavaScript function to download it. That function can sometimes fail to work at all, so they provide the usual “if the file does not download, click here” – but rather than being a direct link, it’s also a JavaScript function; reading the function, it lists a clear bouncer link (which you could download with wget, too!), and with a little more presence of mind you can notice that the link is provided as a GET parameter to the (dynamic, at this point) page on Intel’s server; much easier to copy that out and drop the rest, I’d say;
  • Realtek’s website sometimes does not work properly; on the other hand they give you direct FTP links, so once you know the FTP server you can find the drivers just fine while avoiding the website; it would have been nicer to split it down by driver type so that the listing wouldn’t take a few minutes to load, but I have to say it’s the system that works best – even if FTP does make me feel like we’re back in the early ’90s;
  • almost all download sites tend to have pretty slow or capped connections; I can understand Asus, Gigabyte and Realtek, which seem to have their main servers in Taiwan, but what about Intel? Luckily, at least ATI and nVidia (which have the biggest driver packs) have very fast servers.
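As an aside, the GET-parameter trick mentioned for Intel’s download page can be automated. This is a minimal sketch, and the parameter name (`httpDown`) and URLs are made up for illustration – the real page may use different names:

```python
# Hypothetical sketch: pull the real file URL out of a redirect-style
# download link that passes it along as a GET parameter.  The parameter
# name ("httpDown") and the URLs are made up for illustration.
from urllib.parse import urlparse, parse_qs

def extract_download_url(redirect_url, param="httpDown"):
    """Return the direct file URL embedded as a query parameter, if any."""
    query = parse_qs(urlparse(redirect_url).query)
    values = query.get(param)
    return values[0] if values else None

url = extract_download_url(
    "https://example.com/confirm.aspx?httpDown=https://example.com/driver.exe")
# url now holds the direct link, which can be fetched with wget or urllib.
```

Once extracted, the direct link bypasses the JavaScript dance entirely.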

Then there are other problems, like trying to understand that “ATI Technologies, Inc. SBx00 Azalia” is actually the name reported by lspci for a Realtek Azalia codec that needs the HDA drivers from Realtek; or trying to guess the driver version, or the driver’s name, from the downloaded files, which often enough don’t have any kind of naming or versioning scheme. Again, ATI (for quite a long time) and nVidia (recently) solved this in a pretty nice way: they use their logo for the install executable. This does not make it very manageable under Linux, though, given that Nautilus doesn’t show (yet) the PE icon (maybe I can modify it to load the PE file and extract the icon?).

Let’s just hope that Microsoft’s moves with Vista and Windows 7 will be a trampoline for Linux for the masses; I sincerely count more on Microsoft’s changes than on Google OS, as I’ve noted before, since Vista already gave us something useful for Linux.

No more WD for me, I’m afraid

So I finally went to get the new disks I ordered – or rather, I sent my sister, since I’m at home sick again (it seems my health hasn’t fully recovered yet). I ordered two WD SATA disks, two Samsung SATA disks, and an external WD MyBook Studio box with two disks, with USB, FireWire and eSATA interfaces. My idea was to vary the type and brand of disks I use so that I don’t end up having problems when exactly one of them goes crazy, like it happened with Seagate’s recent debacle.

The bad surprise started when I tried to set up the MyBook; I wanted to set it up as RAID1, to store my whole audio/video library (music, podcasts, audiobooks, TV series and so on), then re-use the space that is now filled with the multimedia stuff to store the archive of downloaded software (mostly Windows software, which is what I use to set up Windows systems, something that I unfortunately still do), ISO files (for various Windows versions, LiveCDs and stuff like that), and similar. I noticed right away that, contrary to the Iomega disk I had before, this disk does not have a physical hardware switch to select RAID0, RAID1 or JBOD. I was surprised and a bit appalled, but the marketing material suggests the thing works fine with Mac OS X, so I just connected it to the laptop and looked for the management software (which is stored on the disk itself, rather than on a separate CD; that’s nice).

Unfortunately, once the software was installed, it failed to install itself in the usual place for applications under OS X, and it also failed to detect the disk itself. So I went online and checked the support site; there was an update to both the firmware of the drive (which means the thing is quite a bit more complex than I’d expect it to be) and to the management software. Unfortunately, neither solved my issue, so I decided it had to be a problem with Leopard, and thus tried with my mother’s iBook, which is still running Tiger: still no luck. Not even installing the “turbo” drivers from WD solved the problem.

Now I’m stuck with a 1TB single-volume disk set which I don’t intend to use that way; I’ll probably ask a friend to lend me a Windows XP system to set it up, and then hope I’ll never have to do it again, but the thing upsets me. Sure, from a purely external hardware side it seems quite nice, but the need for software to configure a few parameters, and the fact that there is no such software for Linux, really makes the thing ludicrous.