Packaging for the Yubikey NEO, a frustrating adventure

I have already posted a howto on how to set up the YubiKey NEO and YubiKey NEO-n for U2F, and I promised I would write a bit more on the adventure to get the software packaged in Gentoo.

You have to realize at first that my relationship with Yubico has not always been straightforward. At least once I decided against working on the Yubico set of libraries in Gentoo because I could not get hold of a device as I wanted to use one. But luckily now I was able to place an order with them (for some two thousand euro) and I have my devices.

But Yubico’s code is usually quite well written, and designed to be packaged much more easily than most other device-specific middleware, so I cannot complain too much. Indeed, they split and release the different libraries separately, each with its own goal, so that you don’t need to wait for enough changes to pile up before they make a new release. They also actively maintain their code on GitHub, and then push proper make dist releases to their website. They are in many ways a packager’s dream company.

But let’s get back to the devices themselves. The NEO and NEO-n come with three different interfaces: OTP (old-style YubiKey, just with much longer keys), CCID (smartcard interface) and U2F. By default the devices are configured as OTP only, which I find a bit strange to be honest. It is also the case that at the moment you cannot enable both U2F and OTP modes, I assume because of a conflict in how the “touch” interaction behaves: there is a touch-based interaction in CCID mode too, which gets entirely disabled once either U2F or OTP is enabled, but the two of them can’t share it.

What is not obvious from the website is that to enable U2F (or CCID) mode, you need to use yubikey-neo-manager, an open-source app that can reconfigure the basics of the Yubico device. So of course I had to package the app for Gentoo, together with its dependencies, which turned out to be two libraries (okay, actually three, but the third one, sys-auth/ykpers, was already packaged in Gentoo — and actually originally committed by me, with Brant proxy-maintaining it; the world is small, sometimes). It was not too bad, but there were a few things that might be worth noting down.

First of all, I had to deal with dev-libs/hidapi, which allows programmatic access to raw HID USB devices: the ebuild failed for me, both because it did not depend on udev and because it was unable to find the libusb headers — the latter turned out to be caused by bashisms in the configure.ac file, which became obvious when I moved to dash. I have now fixed the ebuild and sent a pull request upstream.
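
For illustration, this is the general shape of the bashisms that bite once /bin/sh is dash; the variable name and flags here are hypothetical, not the actual hidapi code:

if test "$have_libusb" == "yes"; then               # bashism: POSIX test only knows =
    CPPFLAGS+=" $(pkg-config --cflags libusb-1.0)"  # bashism: += is not POSIX sh
fi
# The portable spelling of the same check:
if test "$have_libusb" = "yes"; then
    CPPFLAGS="$CPPFLAGS $(pkg-config --cflags libusb-1.0)"
fi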

This was the only really hard part at first, since the rest of the ebuilds, for app-crypt/libykneomgr and app-crypt/yubikey-neo-manager, were mostly straightforward — except that I had to figure out how to install a Python package, as I had never done so before. It’s actually fun how distutils will error out with an install-path violation if easy_install tries to bring in a non-installed package such as nose, way before the Portage sandbox triggers.
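
For the curious, the skeleton of such an ebuild is not that scary; here is a hypothetical, trimmed-down sketch using the distutils-r1 eclass, where the metadata and dependencies are illustrative and the ebuild in the tree is the authoritative version:

# Hypothetical skeleton, not the actual app-crypt/yubikey-neo-manager ebuild.
EAPI=5
PYTHON_COMPAT=( python2_7 )
inherit distutils-r1

DESCRIPTION="GUI tool to configure YubiKey NEO devices"
HOMEPAGE="https://developers.yubico.com/yubikey-neo-manager/"
SRC_URI="https://developers.yubico.com/${PN}/Releases/${P}.tar.gz"

LICENSE="BSD-2"
SLOT="0"
KEYWORDS="~amd64 ~x86"

# Plus the Python GUI toolkit dependencies, omitted here.
RDEPEND="app-crypt/libykneomgr"
DEPEND="${RDEPEND}"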

The problems started when trying to use the programs, doubly so because I don’t keep a copy of the Gentoo tree on the laptop, so I wrote the ebuilds on the headless server and then tried to run them on the actual hardware. First of all, you need to have access to the devices to be able to set them up; the libu2f-host package installs udev rules to allow the plugdev group access to the hidraw devices — but it also needed a pull request to fix them. I also added an alternative version of the rules for systemd users that does not rely on the group but rather uses the ACL support (I was surprised; I essentially suggested the same approach to replace pam_console years ago!).

Unfortunately that only works once the device is already set in U2F mode, which does not work when you’re setting up the NEO for the first time, so I originally set it up using kdesu. I have since decided that the better way is to use the udev rules I posted in my howto post.

After this, I switched off OTP and enabled the U2F and CCID interfaces on the device — and I couldn’t make it stick: the manager would keep telling me that the CCID interface was disabled, even though the USB descriptor properly called it “Yubikey NEO U2F+CCID”. It took me a while to figure out that the problem was in the app-crypt/ccid driver, and indeed the change log for the latest version points out support specifically for the U2F+CCID device.

I have updated the ebuilds afterwards, not only to depend on the right version of the CCID driver – the README for libykneomgr does tell you to install pcsc-lite, but says nothing about the CCID driver you need – but also to check for the HIDRAW kernel driver, as otherwise you won’t be able to configure or use the U2F device for non-Google domains.
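
The kernel check, by the way, is just a couple of lines thanks to the linux-info eclass; roughly like this (from memory, the ebuild in the tree is the reference):

# Warn at install time if the hidraw interface is missing from the kernel;
# the ~ prefix makes this a warning rather than a hard failure, and the
# eclass' default pkg_setup takes care of running the check.
inherit linux-info
CONFIG_CHECK="~HIDRAW"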

Now there is one more part of the story that needs to be told, but in a different post: getting GnuPG to work with the OpenPGP applet on the NEO-n. It was not as straightforward as it could have been, and it did lead to disappointment. It’ll be a good post for next week.

Setting up Yubikey NEO and U2F on Gentoo (and Linux in general)

When the Google Online Security blog announced earlier this week the general availability of Security Key, everybody at the office was thrilled, as we’ve been waiting for the day for a while. I’ve been using this for a while already, and my hope is for it to be easy enough for my mother and my sister, as well as my friends, to start using it.

While the promise is for a hassle-free second factor authenticator, it turns out it might not be as simple as originally intended, at least on Linux, at least right now.

Let’s start with the hardware, as there are four different options of hardware that you can choose from:

  • Yubico FIDO U2F which is a simple option only supporting the U2F protocol, no configuration needed;
  • Plug-up FIDO U2F which is a cheaper alternative with the same features — I have not checked whether it is as sturdy as the Yubico one, so I can’t vouch for it;
  • Yubikey NEO which provides multiple interfaces, including OTP (not usable together with U2F), OpenPGP and NFC;
  • Yubikey NEO-n the same as above, without NFC, and in a very tiny form factor designed to be left semi-permanently in a computer or laptop.

I got the NEO, but mostly to be used with LastPass – the NFC support allows you to have 2FA on the phone without having to type it back from a computer – and a NEO-n to leave installed on one of my computers. I already had a NEO from work to use as well. The NEO requires configuration, so I’ll get back to it in a moment.

The U2F devices are accessible via hidraw, a driverless access protocol for USB devices, originally intended for devices such as keyboards and mice but also leveraged by UPSes. The catch, though, is that you need access to the device node, which the Linux kernel by default makes accessible only to root, for good reasons.
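
You can see that for yourself; the exact numbers and dates will obviously differ, but the device node starts out accessible to root only:

# ls -l /dev/hidraw0
crw------- 1 root root 249, 0 Oct 23 11:28 /dev/hidraw0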

To make the device accessible to you, the user actually at the keyboard of the computer, you have to use udev rules, and those are, as always, not straightforward. My personal hacky choice is to make all the Yubico devices accessible — the main reason being that I don’t know all of the compatible USB Product IDs, as some of them are not really available to buy but come, for instance, from developer-mode devices that I may or may not end up using.

If you’re using systemd with device ACLs (in Gentoo, that would be sys-apps/systemd with acl USE flag enabled), you can do it with a file as follows:

# /etc/udev/rules.d/90-u2f-securitykey.rules
ATTRS{idVendor}=="1050", TAG+="uaccess"
ATTRS{idVendor}=="2581", ATTRS{idProduct}=="f1d0", TAG+="uaccess"

If you’re not using systemd or ACLs, you can use the plugdev group and instead do it this way:

# /etc/udev/rules.d/90-u2f-securitykey.rules
ATTRS{idVendor}=="1050", GROUP="plugdev", MODE="0660"
ATTRS{idVendor}=="2581", ATTRS{idProduct}=="f1d0", GROUP="plugdev", MODE="0660"

These rules originally did not include support for the Plug-up because I had no idea what its VID/PID pair was; I asked Janne, who got one, so I could amend this later. Edit: added the rules for the Plug-up device. Cute, their use of f1d0 as the product ID.

Also note that there are probably less hacky solutions to get the ownership of the devices right, but I’ll leave it to the systemd devs to figure out how to include them in the default ruleset.

These rules will allow your user access not only to /dev/hidraw0 but also to the /dev/bus/usb/* devices. This is intentional: Chrome (and Chromium, the open-source version, works as well) uses the U2F devices in two different modes. One is through a built-in extension that works with Google assets, and it accesses the low-level device as /dev/bus/usb/*; the other is through a Chrome extension which uses /dev/hidraw* and is meant to be used by all websites. The latter is the actually standardized specification and how you’re supposed to use it right now. I don’t know if the former workflow is going to be deprecated at some point, but I wouldn’t be surprised.

For those like me who bought the NEO devices, you’ll have to enable the U2F mode — while Yubico provides the linked step-by-step guide, it was not entirely correct for me on Gentoo, but it should be less complicated now: I packaged the app-crypt/yubikey-neo-manager app, which already brings in all the necessary software, including the latest version of app-crypt/ccid required to use the CCID interface on U2F-enabled NEOs. And if you already created the udev rules file as I noted above, it’ll work without you needing root privileges. Just remember that if you are interested in the OpenPGP support you’ll need the pcscd service (it should auto-start with both OpenRC and systemd anyway).

I’ll recount the issues with packaging the software separately. In the meantime make sure you keep your accounts safe, and let’s all hope that more sites will start protecting your accounts with U2F — I’ll also write a separate opinion piece on why U2F is important and why it is better than OTP; this post is just meant as documentation on how to set up the U2F devices on your Linux systems.

Predictable persistently (non-)mnemonic names

This is part two of a series of articles looking into the new udev “predictable” names. Part one is here and talks about the path-based names.

As Steve also asked in the comments on the last post, isn’t it possible to just use the MAC address of an interface to point at it? Sure it’s possible! You just need to enable the MAC-based name generator. But what does that mean? It means that your new interface names will be enx0026b9d7bf1f and wlx0023148f1cc8 — do you see yourself typing them?

Myself, I’m not going to type them. My favourite suggestion to solve the issue is to rely on rules similar to the previous persistent naming, but without re-using the eth prefix, to avoid collisions (which will no longer be resolved by future versions of udev). I instead use the names wan0 and lan0 (and so on) when the two interfaces straddle a private and a public network. How do I achieve that? Simple:

SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:17:31:c6:4a:ca", NAME="lan0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:07:e9:12:07:36", NAME="wan0"

Yes, these two simple rules do all the work you need if you just want to make sure not to mix up the two interfaces by mistake. If your server or vserver only has one interface, and you want to have it as wan0 no matter what its MAC address is (easier to clone, for instance), then you can go for

SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="*", NAME="wan0"

As long as you only have a single network interface, that will work just fine. For those who use Puppet, I also published a module that you can use to create the file and to ensure that the other methods of achieving “sticky” names are not present.
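
If you are writing the rules by hand instead, the only data you need to gather are the MAC addresses, and sysfs has them readily available:

# Print each interface with its MAC address, ready to copy into the rules.
for iface in /sys/class/net/*; do
    echo "${iface##*/} $(cat "${iface}/address")"
done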

My reasoning for using this kind of name is relatively simple: the rare places where I do need to specify the interface name are usually ACLs, the firewall, and so on. In these, the most important thing for me to know is whether the interface is public or not, so the wan/lan distinction is the most useful. I don’t intend to try to remember whether enp5s24k1f345totheright4nextothebaker is the public or the private interface.

Speaking of which, one of the things that appears obvious even from Lennart’s comment on the previous post is that there is no real assurance that the names are set in stone — he says that a udev upgrade won’t change them, but I guess most people would be sceptical, remembering the track record that udev and systemd have had over the past few months alone. In this situation my personal, informed opinion is that all this work on “predictable” names is a huge waste of time for almost everybody.

If you do care about stable interface names, you most definitely expect them to be more meaningful than 10-digit strings of paths or MAC addresses, so you almost certainly want to go through with custom naming, so that at least you attach some sense to the names themselves.

On the other hand, if you do not care about interface names themselves, for instance because instead of running commands or scripts you just use NetworkManager… well, what the heck are you doing playing around with paths? If it doesn’t bother you that the interface for a USB device changes considerably between one port and another, how can it matter to you whether it’s called wwan0 or wwan123? And if the name of the interface does not matter to you, why are you spending useless time trying to get these “predictable” names working?

All in all, I think this is just a useless nice trick, one that will only cause more headaches than it can possibly solve. Bah humbug!

Predictably non-persistent names

This is going to be fun. The Gentoo “udev team”, in the person of Samuli – who seems to suffer from 0-day bump syndrome – decided to enable by default the new predictable names feature, which is supposed to make things so much nicer in Linux land where, especially for people coming from FreeBSD, things have been pretty much messed up. This replaces the old “persistent” names, which were often too fragile to work, as they did in-place renaming of interfaces and would all too often cause conflicts at boot time, since swapping two devices’ names is not an atomic operation, for obvious reasons.

So what’s this predictable naming all about? Well, it’s mostly a merge of the previous persistent naming system and the BIOS label naming project, which RedHat had already been developing for a few years so that the names of interfaces for server hardware in the operating system match the documentation of said server, so that you can be sure that if you’re connecting the port marked with “1” on the chassis, out of the four on the motherboard, it will bring up eth2.

But why were those two technologies needed? Let’s start by explaining (more or less) how the kernel naming scheme works: unlike the BSD systems, where the interfaces are named after the kernel driver (en0, dc0, etc.), the Linux kernel uses generic names, mostly eth, wlan and wwan, and maybe a couple more for tunnels and so on. This causes the first problem: if you have multiple devices of the same class (ethernet, wlan, wwan) coming from different drivers, the order of the interfaces may very well vary between reboots, either because of changes in the kernel, if the drivers are built in, or simply because of the locking and execution order of module loading (which is much more common for binary distributions).

The reason why changes in the kernel can change the order is that the order in which drivers are initialized has changed before and might change again in the future. A driver could also decide to change the order in which its devices are initialized (PCI tree scanning order, PCI ID order, MAC address order, …), causing the order of interfaces to change even for the same driver. More about this later.

But here’s where my first doubt arises: how common is it for people to have more than one interface of the same class from vendors different enough to use different drivers? Well, it depends on the class of device; on a laptop you’d have to search hard for a model with more than one Ethernet or wireless interface, unless you add an ExpressCard or PCMCIA expansion card (and even those are not that common). On a desktop, I’ve seen a few very recent motherboards with more than one network port, and I have yet to see one with different chips for the two. Servers, that’s a different story.

Indeed, it’s not that uncommon to have multiple on-board and expansion card ports on a server. For instance you could use the two onboard ports as public and private interfaces for the host… and then add a 4-port card to split between virtual machines. In this situation, having a persistent naming of the interfaces is indeed something you would be glad of. How can you tell which one of eth{0..5} is your onboard port #2, otherwise? This would be problem number two.

Another situation in which having a persistent naming of interfaces is almost a requirement is if you’re setting up a router: you definitely don’t want to switch the LAN and WAN interface names around, especially where the firewall is involved.

This background is why the persistent-net rules were devised for udev quite a few years ago. Unfortunately almost everybody has had at least one nasty experience with them. Sometimes the in-place rename would fail, and you’d end up with the temporary names at the end of boot. In a few cases the name was not persistent at all: if the kernel driver for the device changed, or at least changed its name, the rules wouldn’t match and your eth0 would become eth1 (this was the case when Intel split the e1000 and e1000e drivers, but it’s definitely more common with wireless drivers, especially if they move from staging to main).

So the old persistent-net rules were flawed. What about the new predictable rules? Well, not only do they combine the BIOS naming scheme (which is actually awesome when it works — SuperMicro servers such as Excelsior do not expose the label; my Dell laptop only exposes a label for the Ethernet port, but not for either the wireless adapter or the 3G one), but they have two “fallbacks” that are supposed to be used when the labels fail: one based on the MAC address of the interface, and the other based on the “path” — which for most PCI, PCI-E, onboard and ExpressCard ports is basically the PCI address; for USB… we’ll see in a moment.

So let’s see, from my laptop:

# lspci | grep 'Network controller'
03:00.0 Network controller: Intel Corporation Centrino Advanced-N 6200 (rev 35)
# ifconfig | grep wlp3
wlp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500

Why “wlp3s0”? It’s the Wireless adapter (wl) PCI (p) card at bus 3, slot 0 (s0): 03:00.0. Matches lspci properly. But let’s see the WWAN interface on the same laptop:

# ifconfig -a | grep ww
wwp0s29u1u6i6: flags=4098<BROADCAST,MULTICAST>  mtu 1500

Much longer name! What’s going on then? Let’s see, it’s reporting the card at bus 0, slot 29 (0x1d) — lspci uses hexadecimal numbers for the addresses:

# lspci | grep '00:1d'
00:1d.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05)

Okay so it’s a USB device, even though the physical form factor is a mini-PCIE card. That’s common. Does it match lsusb?

# lsusb | grep Broadband
Bus 002 Device 004: ID 413c:8184 Dell Computer Corp. F3607gw v2 Mobile Broadband Module

Note that it does not use the Bus/Device specification there, which is good: the device number increases every time you pop something in or out of the port, so it’s not persistent across reboots at all. What it uses is the path to the device in terms of USB ports, which is a tad more complex, but basically means it matches /sys/bus/usb/devices/2-1.6:1.6/ (I don’t pretend to know how the thing works exactly, but it describes to which physical port the device is connected).
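
One way to see that chain for yourself is to resolve the interface’s sysfs link; the output below is illustrative of my laptop and will differ per machine, but the port chain (2-1.6) is right in there:

# readlink -f /sys/class/net/wwp0s29u1u6i6
/sys/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.6/2-1.6:1.6/net/wwp0s29u1u6i6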

In my laptop’s case, the situation is actually quite nice: I cannot move either the WLAN or WWAN device to a different slot, so the name assigned by the slot is persistent as well as predictable. But what if you’re on a desktop with an add-on WLAN card? What happens if you decide to change your video card for a more powerful one that occupies the space of two slots, one of which happens to be where your WLAN card is? You move it, reboot and… you just changed the interface name! If you’ve been using NetworkManager, you’ll just have to reconfigure the network, I suppose.
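
By the way, if you want to check ahead of time which names udev would pick for a given interface, you can query the net_id builtin directly; something along these lines (the path is the one from my laptop, and the MAC-based name is redacted):

# udevadm test-builtin net_id /sys/class/net/wlp3s0 2>/dev/null
ID_NET_NAME_MAC=wlx0024d6xxxxxx
ID_NET_NAME_PATH=wlp3s0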

Let’s take a different example. My laptop, with its integrated WWAN card, is a rare example; most people I know use USB “keys”, as the providers give them away for free, at least in Italy. I happen to have one as well, so let me try to plug it into one of the ports of my laptop:

# lsusb | grep modem
Bus 002 Device 014: ID 12d1:1436 Huawei Technologies Co., Ltd. E173 3G Modem (modem-mode)
# ifconfig -a | grep ww
wwp0s29u1u2i1: flags=4098<BROADCAST,MULTICAST>  mtu 1500
wwp0s29u1u6i6: flags=4098<BROADCAST,MULTICAST>  mtu 1500

Okay, great: this is a different USB device, connected to the same USB controller as the onboard one, but at different ports. Neat. Now, what if I had all my usual ports busy, and I decided to connect it to the USB3 add-on ExpressCard I got for the laptop?

# lsusb | grep modem
Bus 003 Device 004: ID 12d1:1436 Huawei Technologies Co., Ltd. E173 3G Modem (modem-mode)
# ifconfig -a | grep ww
wwp0s29u1u6i6: flags=4098<BROADCAST,MULTICAST>  mtu 1500
wws1u1i1: flags=4098<BROADCAST,MULTICAST>  mtu 1500

What’s this? Well, the USB3 controller provides slot information, so udev magically uses that to rename the interface, avoiding the otherwise longer wwp6s0u1i1 name (the USB3 controller is on PCI bus 6).

Let’s go back to the on-board ports:

# lsusb | grep modem
Bus 002 Device 016: ID 12d1:1436 Huawei Technologies Co., Ltd. E173 3G Modem (modem-mode)
# ifconfig -a | grep ww
wwp0s29u1u3i1: flags=4098<BROADCAST,MULTICAST>  mtu 1500
wwp0s29u1u6i6: flags=4098<BROADCAST,MULTICAST>  mtu 1500

Seems the same, but it’s not. Now it’s u3, not u2. Why? I used a different port on the laptop. And the interface name changed. Yes, any port change will produce a different interface name, predictably. But what happens if the kernel decides to change the way the ports are enumerated? What happens if the USB 2 driver is buggy, is supposed to provide slot information, and they fix it? You got it: even in these cases, the interface names change.

I’m not saying that the kernel naming scheme is perfect. But if you’re expected to always have just an Ethernet port, a WLAN card and a WWAN USB stick, with it you’ll be sure to have eth0, wlan0 and wwan0, as long as the drivers are not completely broken, as they sometimes are (like when the WLAN appears as eth1), and as long as you don’t muck with the interface names in userspace.

Next up, I’ll talk about the MAC-address-based naming and my personal preference when setting up servers and routers. Have fun in the meantime figuring out what your interface names will be.

I’m doing it for you

Okay, this is not going to be a very fun post to read, and the title can already make you think that I’m being an arrogant bastard this time around, but I get the feeling lately that people are missing the point: even when I’m grumpy, I’m not usually grumpy just because; I’m usually grumpy because I’m trying to get things to improve rather than stagnate or get worse.

So let’s take an example right now. Thomáš posted about some of the changes that are to be expected in LibreOffice 4 — one of these is that the LDAP client libraries are no longer an optional dependency but have to be present. I wasn’t happy about that.

I actually stumbled across that just the other day when setting up the new laptop: while installing KDE components with the default USE flags, OpenLDAP would have been installed. The reason is obviously that the ldap USE flag is enabled by default, which makes sense, as it’s (unfortunately) the most common “shared address book” database available. But why should I get an LDAP server if I explicitly selected a desktop profile?

So the first task at hand was to make sure that the minimal USE flag was present on the package (it was) and that it did what was intended, i.e. not install the LDAP server — and that is indeed the case. Good, so we can install only the client libraries. Unfortunately the default dependencies were slightly wrong with said USE flag, as some things like libtool (for libltdl) are only really used by the server components. This was easy to fix, together with a couple more fixes.

But when I proposed on the mailing list to change the defaults for the desktop profile to have the minimal USE flag enabled, hell broke loose — now, the good point raised there is that the minimal USE flag is definitely being over-used, and I’m afraid I’m at fault there as well, since both NRPE and NSCA have a minimal USE flag; I guess it’s time for me to reel back on that too. And now I have a patch to give openldap a server USE flag, enabled by default – except, hopefully, on the desktop profile – to replace the old minimal flag. Incidentally, looking into it I also found that said USE flag was actually clashing with the cxx one, for no good reason as far as I could tell. But Robin doesn’t even like the idea of going with a server USE flag for OpenLDAP!

On a different note, let’s take hwids — I originally created the package to reduce the amount of code our units’ firmware required, but while at it I ended up with a problematic file on my hands: as I wrote before, the oui.txt file downloaded from the IEEE has been redistributed for a number of years, but when I contacted them to make sure I could redistribute it, they told me that it wasn’t possible. Unfortunately the new versions of systemd/udev use that file to generate a hardware database — finally implementing my suggestion from four years ago; better late than never!

Well, I ended up having to take some flak, and some risk, and now the new hwids package fetches that file (as well as the iab.txt file) and also fully implements rebuilding the hardware database, so that we can keep it up to date from Portage, without having to get people to rebuild their udev package over and over.
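
For reference, the rebuild itself boils down to regenerating udev’s binary hardware database after the source files have been updated; the ebuild wires this up for you, but it is the same thing you could run by hand:

# Recompile the binary hardware database from the installed .hwdb sources.
udevadm hwdb --update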

So, excuse me if I’m quite hard to work with sometimes, but the amount of crap I have to take when doing my best to make Gentoo better, for users and developers, is so high that sometimes I’d just like to say “screw it” and leave it to someone else to fix the mess. But I’m not doing that — if you don’t see me around much in the next few days, it’s because I’m leaving LA on Wednesday, and I can’t post on the blog while flying to New York (the gogonet IP addresses are in virtually every possible blacklist, now and in the future — so there is no way I can post to the blog, unless I figure out a way to set up a VPN and route the traffic to my blog through said VPN…).

And believe it or not, but I do have other concerns in my life beside Gentoo.

pcsc-lite and the Gentoo diversion

You probably all know that I’m not really keen on diverging Gentoo from other distributions as long as that’s feasible, although I’m also always considering the idea of giving an extra edge to our distribution. Today, though, is one of those days when I have to implement something differently in Gentoo to have it working within the distribution.

Today, Ludovic Rousseau released a new pcsc-lite version that is designed to improve autostart and increase safety and security by replacing the setuid-root pcscd with a setgid one (using the pcscd group). Together with that, a new ccid wrapper that sets the permissions on USB devices via udev was released.

Now, while this all looks like good stuff that improves the user experience, it is mostly designed to solve issues with binary distributions — most likely, Ubuntu and derivatives. Autostart is mostly designed to avoid using a pcscd system service; in Gentoo that’s not much of a problem because it’s the user’s choice whether to start an init script or not, but on other distributions, as soon as you install the package, the init script is scheduled to start. Once again, that’s not much of a problem when you install a server package, as that’s the whole point of it, but the pcscd service has to be bundled with the client library — and linking to the client library is decided at build time, so it is likely enabled for many packages in those distributions. Again, these aren’t enough of a concern for us, thanks to our customisable design.

On the other hand, the new design is troublesome for us: the daemon is started with the privileges of the current user, but with access to the pcscd group; that would be okay if it didn’t need to create files in the /var/run/pcscd directory, which we cannot simply create in the ebuild – as /var/run could be on a tmpfs instance – and which cannot simply be re-created by pcscd; it worked before because, as setuid root, it had all the privileges to do so. Ludovic suggested creating a reduced init script whose only task is to create the directory at startup, but at that point, why limit it to simply creating the directory?

The end result is as follows: the init script is updated; it creates the directory, alright, but it now executes the pcscd process under the privileges of nobody/pcscd, rather than root, tightening security a little bit. More importantly, thanks to the fact that USB devices (and other hotpluggable devices) are handled through udev permissions, I’ve also created an extra rules file that hotplugs the service when a new device is added to the pcscd group, which gives Gentoo slightly better usability than before.
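
The hotplug rule itself is nothing fancy; here is a hypothetical sketch of what it can look like, where the VID/PID shown is just one example CCID reader and the rules file actually shipped may match devices differently:

# Illustrative only: start pcscd when a known smartcard reader is plugged in.
ACTION=="add", SUBSYSTEM=="usb", ATTRS{idVendor}=="08e6", ATTRS{idProduct}=="3437", RUN+="/etc/init.d/pcscd --quiet start"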

Unfortunately, this complicates matters further: since the versions of ccid and pcsc-lite need to go hand in hand, stabling them is a PITA; the fact that ifd-gempc is also not fixed yet really doesn’t make it any nicer. On the other hand, the presence of this now Gentoo-specific hotplug path through udev also means that we could finally drop the HAL support in this ebuild, as that was supposed to provide the generic hotplug means before.

I hope this situation will turn out well for everybody; if anything seems to be amiss, don’t hesitate to open a bug or tell me directly, and I’ll look into it.

Gentoo/PulseAudio Summer 2009 Plans

To avoid the bus factor problem, I’m going to write down here what the plans are, as far as I’m concerned, for PulseAudio and Gentoo for the end of Summer 2009, mostly related to what will happen when I come back from my vacation in London, after mid-August.

This actually also comes out of candrews asking for it, as I hadn’t really thought about writing this before that.

So the first thing to say is that I am following PulseAudio pretty closely; or rather, I’m following Lennart pretty closely (he’s also the one who suggested that I rewrite udev’s build system to use non-recursive automake — something I’ll write more about another day), so I’m not sleeping on it.

Indeed, the 0.9.16 test releases are already available in Gentoo, although masked; since recently they support udev hotplug (preferring it over HAL), and they also pass all the checks already. A note on the tests is needed though: the mix-test lacks a few entries, in particular regarding 24-in-32-bit samples, and is for this reason disabled in the current ebuild (Lennart should be working on it); at the same time, the ebuild runs the tests specifically in the source directory, because the intltool checks fail, badly. In theory the problem should be fixed in the 0.41 series of intltool, but I am unsure whether we should package that or not.

In the next release, whether it’ll be another test release or the final release, there will also be a few differences in the handling of audio APIs. The OSS support will be restricted, masking the USE flag on Linux (leaving it enabled for FreeBSD, obviously); this means that users wanting to use stuff like OSS4, which is not in Portage and, if it’s up to me, never will be, will have to go a slightly longer way to get it to work with PulseAudio. The reason for this is that Lennart really doesn’t want to support that, and I can agree with him. Now, if you know the package well, you’ll probably be wondering “what about the OSS-compatibility wrapper?” This is solved already: in git the OSS output and wrapper supports are split into two different options; the former will be tied to the oss USE flag, the latter will be left in “auto-mode”, which will create the padsp wrapper on all Gentoo Linux and FreeBSD systems. And this should fix your problem, Luca!
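
In practice, using the wrapper stays the same as it has always been: prefix the OSS-only program with padsp and its /dev/dsp access gets redirected to PulseAudio. For example (the program name here is made up):

# Run a hypothetical OSS-only player through the PulseAudio OSS wrapper.
padsp oss-only-player track.wav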

As for some of the new features, like for instance the Rygel UPnP support, well, I’ll probably be working on them sometime in the future; I do want to get Rygel into Portage, especially if that will allow me to look at my vacation photos directly on my Sony Bravia networked TV.

On patching and security issues

Jeff, I think your concerns are pretty much real. The problem here, though, is not whether Debian users should be told not to file bugs upstream; the problem is that Debian should not go out of its way to patch stuff around.

Of course this is not entirely Debian’s fault; there are a few projects for which dealing with upstream is a tremendous waste of time of cosmic proportions, as they ignore distributors, think that their needs are totally bogus, and so on. Now, not all projects are like that of course. Projects like Amarok are quite friendly with downstream (to the point that all the patches that are in Gentoo, those added by me at least, were committed at the same time to the SVN), and most of the projects that you find not suiting any distribution most likely just don’t know what distributors need.

I did write about this in the past, and you can find my ideas in the “Distribution-friendly Projects” article, published on LWN (part 1, part 2 and part 3). I do suggest reading it to anybody who has an upstream project and would like to know what distributors need.

But the problem here is that Debian is known for patching the blood out of a project to adapt it to their needs. Sometimes this is good, as they turn a totally distribution-unfriendly package into a decent one; sometimes it’s very bad.

You can find a few good uses of Debian’s patches in Portage; it’s not uncommon for a patched project to be used. On the other hand, I can think of at least two failures that, at least for me, show the way Debian can easily fail:

  • a not-so-commonly known failure in autotoolising metamail, a dependency of hylafax that I tried to run on FreeBSD before. They did use autoconf and automake, but they set them up so that they only work under Linux, proving they don’t know autotools that well;
  • the EPIC FAIL of the OpenSSL security bug, where people wanted to fix a Valgrind warning without knowing Valgrind (if you have ever looked at the Valgrind docs, there is a good reference about suppression files, which are the right tool rather than patching code you don’t understand either — see the sketch right after this list).
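
For the record, this is roughly what a Valgrind suppression looks like; a hypothetical one silencing an uninitialised-value warning coming from libcrypto, which is the kind of thing that should have been written instead of touching the code:

# openssl.supp: pass it to valgrind with --suppressions=openssl.supp
{
   openssl-uninitialised-prng-seed
   Memcheck:Cond
   obj:*/libcrypto.so*
}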

Now this of course means nothing absolute; even in Gentoo there have been good patches and bad patches. I have yet to see an EPIC FAIL like the OpenSSL debacle, but you never know.

The problem lies in the fact that Debian also seems to keep a “holier than thou” attitude toward any kind of collaboration, as you can easily notice in Marco d’Itri’s statements regarding udev rules (see this LWN article). I know a few Debian developers who are really nice guys whom I love to work with (like Reinhard Tartler, who packages xine, and Russell Coker, whose blog I love to follow for both the technical posts and the “green” posts; but not limited to them), but for other Debian developers to behave like d’Itri is far from unheard of, and actually not uncommon either.

I’m afraid that the good in Debian is being contaminated by people like these, and by the attitude of trusting no one but themselves on every issue. And I’m sorry to see that, because Debian was my distribution of choice when I started using Linux seriously.

Setting up a dual-seat system

So I was unable to get all three monitors working on a single X instance. The 4260-wide screen size was too much for X to handle properly, so I decided that the best way to handle this was to get two seats working.

Using a multi-seat system seems like a totally nerdy thing, but I would think that with modern multicore CPUs a multiseat system, if configured properly, might well replace two or more boxes in offices and schools. But I’ll dig deeper into that once I have tried using it for a while.

It might be worth writing down a few comments about how I ended up with the video card I’m running now: my 3D Blaster Banshee had totally corrupted display output, and I’m still not sure if the video card is broken or if it’s an incompatibility between it and the ATi AGP card; the Matrox Mystique couldn’t reach 1280×1024 (I was able to get it to 1024×768, but that’s a bit… too tiny for me, and 2MB of video RAM wasn’t enough); I ended up trying an S3 ViRGE just because it looked packed with video RAM, and it seems I was right, as it supports 1280×1024 just fine… if it weren’t for the refresh rate. I do hope the problem is the “ugly pattern” we all know from Xorg; I’ll check better later to see if it works fine with dwm, and if it doesn’t, I’ll try to see how it works at 256 colours…

It is strange to see how difficult it is to find modern mainboards with more than one PCI-E x16 slot and 16GB of RAM. I decided to get a quad-core with 16GB of RAM for my next box; with the current jobs I’m taking it should be possible for me to get it before June, but I’ll still have to deal with PCI video cards for that reason (as for why 16GB of RAM… it should make it less of a pain to deal with repeated compilation of C++ code, and I want to use a few virtual machines); as far as I can see there’s no way to get two seats working on the same video card with Xorg.

By the way, if you follow the Wiki, you’ll probably see it does not work properly: evdev 1.1 does not respect Phys, evdev 1.2 does not open the devices at all, and evdev from GIT works better, but you have to specify the event device to use (nasty, but it works perfectly fine as you can symlink stuff around with udev).
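
The symlinking is a one-line udev rule per device; hypothetically, for the Apple keyboard it would look something like this (the vendor/product values are examples, check yours with udevinfo):

# Give this keyboard's event node a stable name that xorg.conf can point at.
SUBSYSTEM=="input", KERNEL=="event*", ATTRS{idVendor}=="05ac", ATTRS{idProduct}=="0220", SYMLINK+="input/event-apple-keyboard"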

Talking about udev, the default /dev/input/by-id symlinks are completely useless on my system. The reason is quite simple: I have an Apple Aluminium keyboard (I can’t find anything better to write on!), a Logitech LX700 Cordless Desktop, and, since yesterday, an MX Revolution mouse; each of these peripherals creates two input devices in the kernel (don’t ask me why): the first has one device for the basic keys, plus an extra one for extended keys (like fn); the second has a device for the standard keyboard plus multimedia keys, and one for the mouse plus all the extra keys exposed as mouse buttons; the third has one device for the mouse, plus one for the extra buttons.

As /dev/input/by-id uses the name and the type of the devices, the Apple keyboard overwrites its own symlink, as it obviously has the same name for both of its keyboard devices. The Logitech peripherals instead work quite nicely if they are alone, but as both of them have “Logitech Receiver” as their internal USB name, and both have one keyboard and one mouse… I leave you to guess what happens.

See, Greg? This is where usb.ids comes in useful ;)

Anyway, later today I’ll blog more about the dual-seat system; right now I have some documentation to update and some work to do for my job ;)

ALSA and disposable power tools

No they are not related, unfortunately.

So let’s start with last night’s changes in ALSA, in particular to the alsa-firmware ebuild: the ebuild now looks at the ALSA_CARDS setting, as alsa-driver does, to decide which firmware files to install. Doing this not only reduces the size of the installed data, avoiding installing firmware for cards you don’t have or don’t care about (you might just have one single card and just want the firmware for that one), but also has the nice extra of allowing me to depend on the package needed to load the firmware, which means alsa-tools for hdsploader and fxload for usb-usx2y.
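
Since ALSA_CARDS is a USE_EXPAND variable, those conditional dependencies boil down to ordinary USE-conditional atoms, roughly like this (illustrative, the ebuild in the tree is the reference):

# Only pull in the loaders for the cards that actually need them.
RDEPEND="
    alsa_cards_hdsp? ( media-sound/alsa-tools )
    alsa_cards_usb-usx2y? ( sys-apps/fxload )
"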

Unfortunately, because of this I’m now waiting for sparc and ppc64 to re-add their keywords to the new ebuild, but it’s no big deal.

The problem here is that, I admit, I haven’t looked too deeply into removing the remaining firmware files that are not yet bound to a USE flag. Another thing I’m considering is making the hdsp tools in alsa-tools compile only if ALSA_CARDS has hdsp or hdspm enabled.

This should make up for the “feature removal” that I already applied to that ebuild by removing the (hidden) ALSA_TOOLS trick, which allowed reducing the number of tools that were built. If somebody can tell me what they think about this, that would be nice.

There are still bugs open, especially for the init.d script, which should be smarter when saving settings and when unloading modules, and there is still the interaction with udev that has to be cleared up. What I’m thinking of right now is adding a switch to conf.d/alsasound so that the user can choose whether to handle coldplug through udev or not; in the udev case, starting the init script would simply ask udev to re-run its coldplug. For the loading and storing of state, I’d just write an external script to handle that, so that udev can also call it when inserting modules; maybe an eselect module (though I would feel it a paradox, considering that there’s nothing to select).
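
To give an idea, the switch would be nothing more than a variable in the conf.d file, along these lines (the name and default are made up, none of this is implemented yet):

# /etc/conf.d/alsasound (hypothetical)
# Set to "udev" to let udev coldplug the sound modules and restore their state,
# or "script" to keep the current init script behaviour.
ALSASOUND_COLDPLUG="udev"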

I hate ALSA so much, but as I’m alone working on it, I have to spend time thinking about this :/

On the other topic, my battery-powered drill started losing points as the battery (like all the batteries of cheap battery-powered drills) started to give up. Well, for something I paid about €14.90 for four years ago, that’s not a bad run. Luckily yesterday I found a new drill, virtually identical (same colour, same style, 18V versus 14.4V), this time with two batteries (which is quite good, as you can always have one charged and the other charging), at €12.90. This is what I call a «disposable» power tool: when it starts having problems, junk is what it will become; no point in trying to repair it. Luckily I just do some do-it-yourself work now and then, so I don’t need the tools I use to be more than decent. I also found a cheap workbench, which is a good change for me; in the past months I always had to do my jobs outside on a tree stump, or on the floor of my home office (remember the photos of the serial gender swap?)