Windows 10, OpenSSH and YubiKey

You may remember that a few months ago I suggested that Windows 10 is an interesting FLOSS development platform now, and that I decided to start using Windows 10 on my Dell XPS laptop (also in the hope that the battery problem I had was caused by Linux; the answer to that is “nope”, the laptop’s battery really is terrible). One of the things I realised while setting all of that up is that I could no longer use my usual OpenPGP-based token, so I thought I would try using a YubiKey 5 instead.

Now, between me and Yubico there’s not much love lost, but I thought I would try to make my life easier by using a smartcard backed by a company that seemed interested in this kind of usage. Turns out that this only partially worked, unfortunately.

The plan was to set up the PIV mode of the YubiKey 5 to provide the authentication certificate, rather than trying to use the OpenPGP mode. The reason for that is to be found on Yubico’s own website:

GPG4Win’s smart card support is not rock solid; occasionally you might get error messages when trying to access the YubiKey. It might happen after removing and re-inserting the YubiKey, or after your computer has been in sleep mode, etc. This can be resolved by restarting gpg-agent [snip]

Given that GnuPG’s own smartcard support is kind of terrible already, and not wanting to get into the yak shaving of making it work on Windows, I hoped to rely instead on the more common (on Windows) interface of PKCS#11, which OpenSSH supports natively (sort of). To give a very quick and oversimplified summary, PKCS#11 is the definition of an API/ABI that end-user software, such as OpenSSH, can use to interface with middleware that provides access to PKI-related functions. Many smartcard manufacturers provide ready-made middleware implementing a PKCS#11 interface, which I thought Windows supported directly, but I may be wrong. Mozilla browsers rely on this particular interface to handle CA certificates as well, to the point that the NSS library that Mozilla uses is pretty much a two-part component with a PKCS#11 provider and a PKCS#11 client.

As it turns out, Yubico develops a PKCS#11 middleware for the YubiKey as part of yubico-piv-tool, and provides documentation on how to use it for SSH authentication. Unfortunately the instructions don’t include the information needed to use this on Windows, as they explicitly say at the top of the page. But that would never stop me, after all. Most of the setup described in that document applies just fine to Windows, by the way, right up until you hit the first issue…

The first issue with setting this up is that while Windows 10 does ship with an OpenSSH client (and server), it does not ship with PKCS#11 support enabled. Indeed, the version provided even with 20H1 (the most recent non-Insider build at the time of writing) is 7.7p1, while the current upstream release is 8.3p1. Thankfully, Microsoft provides a more up-to-date build, although that is also still stuck at 8.1p1. The important part is that those binaries do include PKCS#11 support.

For this whole setup to work, you need to have both the OpenSSH binaries provided by Microsoft and the Yubico libraries (DLLs) in folders that are part of the PATH environment variable. They also need to match in ABI: if you’re setting this up on an x64 system and used the 64-bit OpenSSH build, you should install the 64-bit Yubico PIV Tool, and vice versa for 32-bit installs.

Now, despite the installer warning you that to use the PKCS#11 provider you need to have its bin folder in the PATH variable, and that loading the provider by full path will not be enough… the installer does not offer to modify the PATH itself (unlike the Git installer, which does, to make it easy to use globally). This is not too terrible, because you also need to add the new OpenSSH to the PATH. For myself, I decided to use a simple OpenSSH folder in my home directory.

Modifying the environment variables in (English) Windows 10 is fairly straightforward: hit the Search function, and type Environment — it’ll come up with the right control panel, and you can then edit the PATH variable and just browse for the right folder.
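
If you prefer doing this from a terminal instead, a rough PowerShell equivalent would be something along these lines; the two folders are only examples, adjust them to wherever you unpacked OpenSSH and installed the PIV Tool:

# Append the OpenSSH and Yubico PIV Tool folders to the per-user PATH.
$newPaths = "$env:USERPROFILE\OpenSSH;C:\Program Files\Yubico\Yubico PIV Tool\bin"
$userPath = [Environment]::GetEnvironmentVariable("Path", "User")
[Environment]::SetEnvironmentVariable("Path", "$userPath;$newPaths", "User")

You’ll need to open a new terminal afterwards, since the change only applies to newly started processes.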

There is one more thing you need to do, and that is to create a .ssh/config file in your home directory with the following content:

PKCS11Provider libykcs11.dll

This instructs OpenSSH to look for the Yubico PKCS#11 provider automatically, instead of having to specify it on the command line. Note once again that while you could provide the full path to the DLL file, if you didn’t add its folder to the PATH it would likely not load: Windows 10 is stricter about where it looks for dependencies when dynamically loading a DLL. Also, you’ll get a “not a valid Win32 application” error if you installed/configured the wrong version of the Yubico tool (32-bit vs 64-bit).
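
To double-check that the provider can be loaded at all, you can also point the OpenSSH tools at the DLL explicitly; this assumes the build you installed really does have PKCS#11 support, that the DLL’s folder is already in PATH, and user@remote-host is obviously a placeholder:

# List the public keys the YubiKey exposes through the PKCS#11 provider;
# the output is in the format you would paste into the remote authorized_keys.
ssh-keygen -D libykcs11.dll

# Use the provider for a single connection, without touching ~/.ssh/config.
ssh -I libykcs11.dll user@remote-host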

After that is done, ta-dah! It should work fine!

Screenshot of Windows PowerShell using a YubiKey 5 to authenticate to a Gentoo Linux system.

This works, when using PowerShell. You get asked to enter the PIN for the YubiKey, and you login just fine. Working exactly as intended there.

Unfortunately, the next step was to use VSCode to connect to my NUC and work on things like usbmon-tools remotely, and for that I needed this authentication method to work through the Visual Studio Code remote host mode… which is not working at the time of writing. The prompt comes up, but VSCode does not appear to proxy it to anything in its UI for me to answer it.

I’m surprised, because as far as I can tell, the code responsible for the prompt uses the correct read_passphrase() function call for it to be a prompt proxied to the askpass implementation, which I thought was already taken care of by VSCode. I have not spent too much time debugging this problem yet, but if someone is more familiar than me with VSCode and can track down what should happen there, I’d be very happy to hear about it. For now, I filed an issue.

Update 2020-08-04: Rob Lourens from Microsoft moved the issue to the right repository and pointed to another issue (filed later, but in the right place).

The workaround to use this from VSCode is to make sure that "remote.SSH.useLocalServer": true is set, and to click on the Details link at the bottom-right corner when it’s trying to connect, to type in the PIN. At that point everything seems to work fine, and it even uses the connection multiplexer to avoid asking for the PIN all the time.
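
For reference, that setting goes into the user settings.json; a minimal sketch with just the relevant key:

// settings.json (user settings)
{
    "remote.SSH.useLocalServer": true
}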

Screenshot of Visual Studio Code showing a remote SSH connection to my Linux NUC with usbmon-tool open.

Network Security Services (NSS) and PKCS#11

Let’s first clear up a big mess. In this post I’m going to talk about dev-libs/nss or, as the title suggests, Network Security Services, which is the framework developed first by Netscape and now by the Mozilla Project for implementing a number of security layers, including (especially) SSL. This should not be confused with the many other similar acronyms, especially with the Name Service Switch, which is the interface that allows your applications to resolve hosts and users against databases they weren’t designed to use in the first place.

In my previous posts about smartcard-related software components – first and second – I started posting a UML components diagram that was not very detailed but generally readable. With time, and with the need to clarify my own understanding of the whole situation, the diagram is getting more complex, more detailed, but arguably less readable.

In the current iteration of the diagram, a number of software projects are exploded into multiple components, like I originally did with the lone OpenCryptoki project (which I should have been writing about, but I haven’t had enough time to finish cleaning it up yet). In particular, I split the NSS component in two sub-components: libnss3 (which provides the actual API for the applications to use), and libnssckbi, which provides access to the underlying NSS database. This is important because it shows how the NSS framework actually communicates with itself through the use of the standard PKCS#11 interface.

Anyway, back to NSS proper. To handle multiple PKCS#11 providers – which is what you want to do if you intend to use a hardware token, or a virtual one for testing – you need to register them with NSS itself. If you’re a Firefox user, you can do that from its settings window, but if you’re a Chromium user, you’re mostly out of luck as far as a GUI is concerned: the official way to deal with certificates and the like in Chromium is to use the NSS command-line utilities, available with the utils USE flag for dev-libs/nss.
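
To give an idea of what that looks like, this is roughly how you register a PKCS#11 module and inspect the result with those utilities; the module path is just an example, and the sql: database is the shared one discussed right below:

# Register a PKCS#11 provider with the (shared) NSS database.
modutil -dbdir sql:$HOME/.pki/nssdb -add "OpenSC" -libfile /usr/lib/opensc-pkcs11.so

# List the registered modules and the certificates NSS can see.
modutil -dbdir sql:$HOME/.pki/nssdb -list
certutil -d sql:$HOME/.pki/nssdb -L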

First of all, by default Mozilla, Evolution and Chromium, and the command-line utilities use three different paths to find their database: one depending on the Mozilla profile, ~/.pki/nssdb, and ~/.netscape respectively. Even more importantly, by default the first and the last will use an “old” version of the database, based on the Berkeley DB interface, while the other two will use a more modern, SQLite-based database. This is troublesome.

Thankfully, the Mozilla Wiki has an article on setting up a shared database for NSS, which you might want to follow to make sure that you use the same set of certificates between Firefox, Chromium, Evolution and the command-line utilities. What it comes down to is just a bunch of symlinks. Read the article yourself for the instructions; I only have to note that you should do this as well:

~ % ln -s .pki/nssdb .netscape

This way the NSS utilities will use the correct database as well. Remember that you have to log out and log back in to tell the utilities and Firefox to use the SQL database.
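
The logout is needed because, if I remember the wiki article correctly, the switch to the shared SQLite format is driven by an environment variable set in your profile, something along these lines:

# ~/.profile or equivalent: make NSS-based tools default to the SQLite database format.
export NSS_DEFAULT_DB_TYPE="sql"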

Unfortunately I haven’t been able to get a token to work in this environment; on one side I’m afraid I might have busted the one Eva sent me (sigh! but at least it served the purpose of getting most of this running); on the other, Scute does not allow uploading an arbitrary certificate, only generating a CSR, which I obviously can’t get signed by StartSSL (my current certificate provider). Since I’m getting paranoid about security (even more so since I’ll probably be leaving my servers in an office when I’m not around), I’ll probably be buying an Aladdin token from StartSSL though (which also means I’ll be testing out their middleware). At that point I’ll give you more details about the whole thing.

Additional notes about the smartcard components’ diagram

Yesterday I wrote about smartcard software for Linux providing a complex UML diagram of various components involved in accessing the cards themselves. While I was quite satisfied with the results, and I feel flattered by Alon’s appreciation of it, I’d like to write a few notes about it, since it turns out it was far from being complete.

First of all, I didn’t make it clear that not all cards and tokens allow for the same stack to be used: for instance, even though the original scdaemon component of GnuPG allows using both PC/SC and CT-API to interface with the readers, it only works with cards that expose the OpenPGP application (on the card itself); this was a by-design omission, mostly because otherwise the diagram would have been pretty much unreadable.

One particularly nasty mistake I made relates to the presence of OpenSSL in that diagram; I knew that older OpenSSL versions (0.9.x) didn’t have support for PKCS#11 at all, but somehow I assumed that support was added in OpenSSL 1.0; turns out I was wrong, and even with the latest version you’re required to use an external engine – a plugin – that makes the PKCS#11 API accessible to OpenSSL itself, and thus to the software relying on it. This plugin is, as usual, developed by the OpenSC project.
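
As a rough idea of how that plugin gets wired in, this is the classic dynamic-engine invocation; the library paths vary across distributions, so treat them as placeholders:

# Load OpenSC's engine_pkcs11 into OpenSSL and point it at a PKCS#11 module.
openssl engine dynamic \
  -pre SO_PATH:/usr/lib/engines/engine_pkcs11.so \
  -pre ID:pkcs11 -pre LIST_ADD:1 -pre LOAD \
  -pre MODULE_PATH:/usr/lib/opensc-pkcs11.so

The same parameters can be made permanent through an engine section in openssl.cnf, so that software built on top of OpenSSL picks the engine up automatically.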

Interestingly enough, NSS (used by Firefox, Chromium and Evolution among others) supports PKCS#11 natively, and actually seems to use that very interface to access its internal storage. Somehow, even though it has its own share of complexity problems, it makes me feel much more confident. Unfortunately I haven’t been able to make it work with OpenCryptoki, nor have I found how to properly make use of the OpenSC-based devices to store private and public keys accessible to the browser. Ah, NSS also seems to be the only component that allows access to multiple PKCS#11 providers at once.

Also, I found the diagram confusing, as software providing similar functionality was present at different heights in the image, with interfaces consumed and provided being connected on any side of the components’ representation. I have thus decided to clean it up, giving more sense to the vertical placement of the various components, from top to bottom: hardware devices, emulators, hardware access, access mediators, API providers, abstractions, libraries and – finally – user applications.

I also decided to change the colour of three interface connections: the one between applications and NSS, the new one between OpenSSL and its engines, and the one between OpenCryptoki and the ICA library (which now has an interface with an “S390” hardware device: I don’t know the architecture well enough to understand what it accesses, and IBM documentation leaves a lot to be desired, but it seems to be something limited to their systems). These three are in-process interfaces (like PKCS#11 and, theoretically, PC/SC and CT-API, more on that in a moment), while SCD, the agent interfaces, TrouSerS and TPM are out-of-process interfaces (requiring inter-process communication, or communication between the userland process and the kernel services); I don’t know ICA well enough to tell whether it uses in- or out-of-process communication.

Unfortunately, to give the proper details of in-process versus out-of-process communication, I would need to split a number of those components into multiple ones, as both pcsc-lite and OpenCT use multiple interfaces: an in-process interface to provide the public interface to the application, and an out-of-process interface to access the underlying abstraction; the main difference between the two is where this happens (OpenCT implements all the drivers as standalone processes, and the in-process library accesses those; pcsc-lite instead has a single process that loads multiple driver plugins).

In the new diagram, I have detailed more the interaction between GnuPG, OpenSSH and their agents, simply because that situation is quite complex and even I strained to get it right: GnuPG never speaks directly with either of the SCD implementations; there is always another process, gpg-agent, that interfaces them with GnuPG proper, and it is this agent that provides support for OpenSSH as well. The distinction is fundamental to show that it is quite possible to have the PKCS#11-compatible SCD implementation provide keys for OpenSSH, even though the original ssh-agent (as well as ssh itself) has support for the PKCS#11 interface. Also, the original scdaemon implementation can, in very few cases, speak directly with the USB device via CCID.
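
For the record, the gpg-agent side of that OpenSSH support is just a configuration switch plus pointing ssh at the agent’s socket; a minimal sketch, with the caveat that the exact way the environment gets exported depends on how your session starts the agent:

# ~/.gnupg/gpg-agent.conf
enable-ssh-support

# then, from the session startup scripts, export SSH_AUTH_SOCK and friends:
eval $(gpg-agent --daemon --enable-ssh-support)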

I have decided to extend the diagram to incorporate the TPM engine for OpenSSL while at it, making it much more explicit that there is a plugin interface between OpenSSL and PKCS#11. This engine accesses TrouSerS directly, without going through OpenCryptoki.

Speaking about OpenCryptoki, which I’ll try to write about tomorrow, I decided to expose its internals a bit more than I have done for the rest of the packages for now, mostly because it will make it much easier tomorrow to explain how it works if I split it out this way. In particular, though, I’d like to note that it is the only component that does not expose a bridge between a daemon process and the applications using it; instead it uses an external daemon to regulate access to the devices, and then uses in-process, plugin-based communication to abstract access to the different sources of secure key storage.

OpenCT could use a similar split-up, but since I’m not going to write about it for now, I don’t think I’ll work more on this. At any rate, I’m very satisfied with the new diagram. It has more details, and is thus more complex, but it also makes it possible to follow more closely the interaction between the various pieces of software, and to see why it is always so difficult to set up smartcard-based authentication (and hey, I still haven’t added PAM to the mix!)

Smartcards again

People seem to know by now that I have a particular passion for the security devices called smartcards. I’m not sure why myself, to be honest, but a few years ago I decided to look more into this, and nowadays I have three smartcard readers at home connected to the three main computers I use, and I use an FSFE card to store my GnuPG keys and to log in to local and remote SSH services.

In Gentoo, unfortunately, most of the smartcard-related software has been vastly ignored for years, or was (and still is) only considered for the small use cases of individual developers and users, rather than in the general picture of it all. I have been trying to improve the situation ever since I first experimented with token-based login over a year and a half ago, but even my results are not really good.

The last hard work I did on the subject was directed toward pcsc-lite improvements, which brought me to hack at the code to improve support for two of the three devices I have here: the blutronics bludrive II CCID – which has a firmware quirk requiring the CCID description to be looked up in the “wrong place” – and a Broadcom BCM5880 security device that provides dual-interface access to standard smartcards and to contactless cards as well (I have to thank my trip to London two years ago for having an RFID card available at home to try it out!).

Since my personal smartcard setup has been mostly complete and working fine for a while now, I wasn’t planning on working hard on anything in particular, unless my job required me to, like with OpenCryptoki a couple of months ago. On the other hand, after my complaining about stable testing last week, I started wondering if I couldn’t leverage the work I’ve been doing on OpenCryptoki to provide an easy way to test PKCS#11 software for people without the required hardware devices. Between that and a messed-up bump of OpenSC (0.12.0) in tree, I have been looking hard at the situation again.

Before moving on to describe the recent developments on the topic, though, I’d like to give an insight on why you cannot blame anyone in particular for the sorry state of smartcard handling in Gentoo. The following UML diagram is a schematic, vastly simplified component view of the software (and, very selectively, hardware) involved in smartcard access:

Smartcard Components UML diagram

In this diagram, the deep-green interfaces (circles) are those that are standardized by multiple organisations:

  • CCID is defined by the USB Forum;
  • CT-API is defined by a number of German organisations;
  • PC/SC is specified by its own workgroup which also defines the IFD interface;
  • PKCS#11 is defined by RSA.

The red components are implemented as long-running services (daemons) on your Linux (or other Unix) system, the white ones are hardware devices, the blue ones are software libraries, and finally the green ones are the applications the users use directly! Almost every one of those components is a standalone package (the only package split in two components is GnuPG, and that’s just because Alon’s alternative SCD implementation makes it necessary to make the interface providers/consumers explicit there).

This whole complexity not only makes it very difficult for distributions to manage the software correctly, but also introduces a number of sensitive points of contact between the software components, many more than one would like to have in a security-sensitive context such as smartcard handling. Sometimes I wonder if they are really secure at all.

Back to what I have been doing in Gentoo, though. My first desire was to leverage the tpm-emulator and OpenCryptoki combo to allow arch testers to test PKCS#11 packages, such as pam_pkcs11 and pam_p11 (both of which are left out of the component diagram above by choice: to add those, I would have had to add another indirection layer – libpam – to reach a user-accessible application like login), without the need for rare, and expensive, hardware devices. I’ve been working on OpenCryptoki’s ebuild for a while, rewriting its build system and doing other general improvements; unfortunately it seems to me like it still doesn’t work as it is supposed to. I thought it could have been a problem with the software token emulation, so I figured it might be better to use the emulated TPM device, but even that method is not viable: even the latest version of the package does not seem to build properly against the current 2.6.38 Linux version, let alone the ancient version we have in the tree right now. I have a possibly-working ebuild for the 0.7 series (which uses CMake as its build system), but since I can’t get the module to build, I haven’t committed it yet. This is likely one good candidate for the Character Device in Userspace (CUSE) interface.
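
The idea, for reference, is that once OpenCryptoki’s software token behaves, an arch tester could exercise any PKCS#11 consumer against it with nothing more exotic than something like the following; the library path is from memory, so double-check it against what the ebuild actually installs:

# Poke at the OpenCryptoki token(s) through the PKCS#11 library itself.
pkcs11-tool --module /usr/lib/opencryptoki/libopencryptoki.so --list-slots
pkcs11-tool --module /usr/lib/opencryptoki/libopencryptoki.so --list-mechanisms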

With the emulator being unbuildable, and the software-emulated token seemingly not working, using OpenCryptoki was thus slated for later review. I then switched my focus to OpenSC: version 0.12.0 was a major change, but in Gentoo it seems to have been bumped without proper consideration: for instance, the ebuild was committed with optional pcsc-lite support, but without switches for any other alternative interface, and without any support left for the OpenCT interface, which for some devices – including the iKey 3000 device that Gilles provided me with – is the only viable solution. Thanks to Alon (who’s a former Gentoo developer and an upstream developer for OpenCT/OpenSC), I was able to fix this up, and now OpenSC should be working properly in Gentoo. What is not currently implemented is support for non-OpenCT implementations of the CT-API interface, since I don’t know of any other software available in Portage that implements it; if you know of any, let me know and I’ll see about adding support.

Now, for whatever reason, last time I worked on this I ended up using pcsc-lite as my main hardware access provider – possibly because it is the easiest way to set it up for GnuPG and OpenPGP – and I didn’t want to throw it out right now, especially since I have a relatively good relationship with Ludovic (upstream) and I had already spent time fixing support for two of my three readers, as I said before. Thankfully, as the diagram suggests, OpenCT not only provides a CT-API interface, but also an IFD one, which can be used with pcsc-lite, providing layered access to OpenCT-supported readers, including the ikey3k that I have here. Support for that in Gentoo, though, was not really sufficient: OpenCT didn’t install a bundle file for pcscd to discover, and the recent changes that make pcscd run without privileges prevented the service from accessing the OpenCT sockets. I wouldn’t mind at some point moving all of the daemons to run under the same privileges, but that might not be such a good idea, and definitely not an easy one: while I can easily change the user and group settings that pcscd runs with – thanks to the Gentoo deviation, I set the privileges just once, in the pcsc-lite udev rules file – it would probably require a bit of work to make sure that OpenCT and the other smartcard-enabled services didn’t step on each other’s toes. In the ~arch version of the two packages these issues are all solved, and indeed I can access the ikey3k device with pcsc_scan just fine, and from OpenSC as well.
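
For reference, the quick sanity check for that layering boils down to two commands, pcsc_scan from the pcsc-tools package and opensc-tool from OpenSC itself:

# Does pcscd see the reader through the OpenCT IFD handler?
pcsc_scan

# And does OpenSC see it through PC/SC?
opensc-tool --list-readers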

I am unfortunately quite far from making use of the keys stored on PKCS#11 devices in any software other than the PAM modules I have already written about. Alon’s alternative SCD implementation should make it possible to use any PKCS#11-compatible device (token or smartcard) to handle signatures for GnuPG and keys for SSH. What I’d be interested in, though, would be providing a PKCS#11 interface to the OpenPGP card I already have, so as to be able to mix devices. This should have been possible with OpenSC, as it implements an interface for OpenPGP applications and should expose it with PKCS#11 compatibility; reality, though, tends to disagree. I’m not sure whether it is simply a bug in the current code, or whether OpenPGPv2 cards are not supported by the project. I don’t think I’ll have enough time to work on that code anytime soon.

Alon suggested an alternative approach: using Scute (http://www.scute.org/), a project that aims at adding a PKCS#11 interface to OpenPGP cards so they can be used with Mozilla products. Unfortunately a quick check shows that the package does not build with the current version of its dependencies. And this is another task that would require more time than I have, as I noted before (https://flameeyes.blog/2011/04/storing-packing-and-disposing-of), and thus will simply be slated for an undefined “later”.

Gentoo PAM developments

Here I am blogging once again about PAM, which seems to be my main issue nowadays. First of all I have to say I’m still looking for somebody to hire me so that the complete audit can take place, especially since, as I’ll explain in a moment, the situation is worse than I had anticipated.

First of all, I wanted to finalise what I started over a year ago with the pam_pkcs11 support in pambase. To do so I needed to be able to connect Gilles’s token to a virtual machine (since I didn’t want to experiment on Yamato itself). Doing so I found not one, but two libvirt bugs.

The first was a problem with passing the device bus and number: libvirt sent them prefixed with zeroes to form 3-digit numbers, but QEmu then interpreted them as octal numbers, so 001:016 became 1.14. Easy fix by swapping two sprintf() calls. The second was nastier, and I was able to complete the fix just yesterday: when the kernel has support for CGroups, libvirt uses them as a security measure, to ensure that the virtual machines can’t allocate more memory than they are supposed to, or access devices they are not supposed to. Unfortunately, if you asked libvirt to connect a USB device to QEmu, its device pair wasn’t added to the list, so QEmu was unable to use it. The first is fixed in 0.8.5; the latter in the r1 backports in Gentoo, and it was sent upstream to be fixed there as well.

Besides dealing with the bugs in libvirt, I also made some changes to the new pambase branch using M4, which actually works as intended now. Thanks to the comments on the previous article, the situation is actually improving. In particular, thanks to MK_FG, I tried the substack/sufficient method again and it works quite fine. Using simply sufficient would create a problem if you don’t want a stop-stack effect at the end of system-auth (which would create other problems for other services, as I’ve learnt the hard way before), so this should be much better.
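
To illustrate what that buys us, here is a toy sketch of mine (not the actual pambase output) of why substack matters when system-auth uses sufficient internally:

# /etc/pam.d/some-service (sketch)
auth  substack  system-auth
auth  required  pam_warn.so

# /etc/pam.d/system-auth (sketch)
auth  sufficient  pam_krb5.so
auth  required    pam_unix.so try_first_pass

With include, a success from pam_krb5.so would terminate the whole caller’s auth stack and skip pam_warn.so; with substack it only ends the system-auth sub-stack, and the rules the service adds after it still run.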

Indeed, the new branch implements support for pam_pkcs11, pam_ssh, pam_krb5 and pam_unix all together! Also, the password-changing service now supports running both pam_passwdqc and pam_cracklib (before, only one of the two would work on the service). It doesn’t, though, work for changing the PIN of smartcards or the Kerberos password. I’m going to implement pam_p11 support soon enough.
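
For the curious, chaining the two password-quality checkers boils down to something along these lines (a hand-written sketch, not the generated pambase file): the first module prompts for the new password, and the later ones reuse it through use_authtok:

# password chain (sketch)
password  required  pam_passwdqc.so
password  required  pam_cracklib.so  use_authtok
password  required  pam_unix.so      use_authtok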

While working on this, though, and having a number of stable requests going on to fix various things (like the shadow problems and ConsoleKit), I also found that two days ago a new Linux-PAM version was released, 1.1.3, with a few security fixes that will likely require a quick stabilisation. But beyond the security fixes, there is another reason why this version is notable.

You might remember that last time I stated that only two patches were applied on version 1.1.2. Well, this time around no patches are applied over the released Linux-PAM at all! This makes it the first version in five years that Gentoo is shipping without custom patches, and thus without needing to rebuild autotools. It is indeed a milestone for us.

Dropping the old 1.1.0 version also meant removing four extra patches from the tree; once 1.1.3 is stabled on all arches, I’ll be removing the remaining patches, which account for about 14KiB of the tree as it is.

After all this good news, there is bad news as well: as it happens, while I’m the only person in the PAM team following Linux-PAM, pambase and the like (soon to be joined by Constanze, luckily!), there are a number of other people who add PAM modules to the tree. Lately, two fingerprint-based PAM modules were added, and both have multiple mistakes in them. Am I happy about that? Not really.

Besides, there are still problems with symbol collisions; for some packages they are easier to fix than for others…

New pambase choices

So, while I’m still hoping somebody will hire me to do a complete audit, I’ve at least started working on the new pambase code. To do so I had to make a few more choices than simply keeping the current status quo running.

First of all, I changed the backend language used to describe the rules. Up to now I abused the C preprocessor as a macro language; this allows for arithmetic comparison of (properly formatted) version numbers, but doesn’t allow for increments and decrements, and it’s not extremely flexible. The new pambase will instead make use of GNU M4 (the same language used by autoconf). M4 is designed to be a macro language by itself, which makes it very simple to implement the kind of copy-and-paste of rules that pambase needs. Not only that, but it’s already part of the system set, both because of autoconf and because it is one of the standard POSIX utilities.
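
As a flavour of what that means in practice, here is a toy sketch of mine (not the actual pambase sources, where the defines come from the build configuration) showing how a conditional rule looks in M4; running m4 system-auth.m4 > system-auth produces the final service file:

dnl system-auth.m4 (toy sketch): emit the Kerberos rule only when HAVE_KRB5 is defined
define(`HAVE_KRB5')dnl
ifdef(`HAVE_KRB5', `auth    optional    pam_krb5.so')
auth    required    pam_unix.so try_first_pass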

The second decision is a harder one, and that is to stop proactively supporting OpenPAM and the FreeBSD operating system. It’s not something I’m doing lightheartedly, and I’ll make sure not to require Linux-PAM more than is needed, but right now there is just not enough help to support both implementations. Plus, while it made total sense to support OpenPAM when I first added support for it in Portage (with the Linux-PAM 0.78 series), with the most recent releases, in particular the 1.1 series, Linux-PAM has not only doubled the featureset of OpenPAM, but it also provides a clean interface and very polished code. By focusing more on Linux-PAM (staying, though, as independent as possible from the operating system) it’s quite possible to handle multiple authentication schemes.

Speaking about authentication schemes, when I first implemented Kerberos support in pambase there were a few rough edges to polish. For one, chaining a number of authentication schemes is not easy: you cannot use the required option, obviously, because you usually authenticate against only one of the schemes at a time; you cannot use optional because otherwise you might log in even if all the schemes fail; you cannot use sufficient because that stops the chain at the first authentication that works, and you might have further restrictions in chained services.

The only solution I could find was to extend the solution I applied to Kerberos: using Linux-PAM’s advanced result specification, if any authentication succeeds, then instead of proceeding with the rest of the module specifications, it jumps directly to the end of the current chain, where a pam_permit.so entry will let authentication succeed. If none of the authentication methods succeed, a pam_deny.so call ensures that login fails.
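
In practice the trick looks more or less like this (a simplified sketch, the real pambase output has more modules and options): success=N skips the following N rules, landing on the pam_permit.so entry at the end of the chain:

auth  [success=3 default=ignore]  pam_krb5.so
auth  [success=2 default=ignore]  pam_pkcs11.so
auth  [success=1 default=ignore]  pam_unix.so try_first_pass
auth  requisite                   pam_deny.so
auth  required                    pam_permit.so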

Another problem related to multiple authentication schemes is how to handle password changing, which is another problem we have faced with shadow. Right now we have a lot of configuration files specifying password method chains. A lot of those have likely been added due to a misunderstanding of that service class as “check against the password” (which is not the case: that’s auth!). For instance, sshd by default provides a password class chain, but OpenSSH does not allow you to change your password in any way.

While cleaning up all the configuration files to ensure that they only list the services they support is something that requires the full audit I wrote about, at least I will prepare the new pambase to handle that correctly. This means that system-auth will no longer provide password chains; instead, a system-password chain will be added that will take care of that, and it will be used by the very few packages that allow changing passwords (such as shadow). Interestingly enough, the situation here is going to be quite different from what we have now. Many of the alternative authentication methods (PKCS#11, OTP, Yubikey) do not allow changing the authentication password, so they shouldn’t be listed there; some others have different tools to change the password, such as Kerberos (kadmin) and pam_ssh, and would most likely not have to be listed there either. But for those that have to be listed, including Gnome-Keyring, changing the password should act on all of them, not just one, so the skip system described above cannot apply there.

Unfortunately, not only does this require quite some changes to the pambase package, but it also has to be coordinated with a number of other packages, such as shadow, sshd and so on. Given this, don’t expect it until mid-to-late November, probably later if I find some other job to follow. Once again, if somebody is interested in having better PAM support in Gentoo, it can be done faster, but not in my spare time. It’s not something a single volunteer can deal with in spare time.

How special PAM supports gets added to Gentoo

You might wonder why the PAM support for special authentication methods is somewhat lacking in Gentoo; the reason is that, mostly, I maintain PAM alone, which means that you get to use whatever I use myself most of the time. One of the things I was very upset we didn’t support properly was smartcard/token-based authentication; unfortunately, while I got two smartcard readers in the past months to do some work, I hadn’t gotten hold of a smartcard yet, and tokens seem to be quite difficult to find for end users like me.

Thanks to Gilles (Eva), I now have a token to play with, and that means I’m looking to write up proper support for token-based authentication (and thus smartcard-based as well). This already started well, because I was able to get one patch (split in three) merged in pam_pkcs11 upstream (available in the Gentoo 0.6.1-r1 ebuild), as well as cleaning up the ebuild to work just like a PAM ebuild is supposed to (for instance, not installing the .la files, which are not used at all).

But since this is not yet ready to use, it’s easier if I show you how it works after a day or two of tweaking:

Yes today I was quite bored.

Please note that this is not really “production ready” in my opinion:

  • the pam_pkcs11 module uses the /etc/pam_pkcs11 directory for configuration, but almost all PAM modules use /etc/security for their configuration;
  • the pkcs11_eventmgr daemon has to be started manually by the user, yet it uses a single, system-wide configuration file (/etc/pam_pkcs11/pkcs11_eventmgr.conf); this does not really seem to me to be the right way to handle it, but I’ll have to discuss that with upstream;
  • most likely we want to provide, based on a USE flag or in a different ebuild, some scripts to handle the event manager more easily, for instance making it start on each X and console login, and making sure that the session is locked as soon as the key is removed (see the sketch right after this list);
  • the event manager polls for the card, which uses CPU and power for no good reason; a proper way to handle this would require udev to send signals on plug and removal so that the event manager can react to them; since the exact key needed is unlikely to be known at rules-generation time, this might require a central daemon monitoring all the smartcards and tokens and passing the information to registered event managers.
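
To give an idea of what the locking setup looks like today, this is roughly the kind of configuration pkcs11_eventmgr takes; I’m paraphrasing from memory of the example file shipped with pam_pkcs11, so double-check the key names against the real one:

pkcs11_eventmgr {
    daemon = true;
    # polling interval in seconds; this is the CPU/power waste mentioned above
    polling_time = 1;
    pkcs11_module = /usr/lib/opensc-pkcs11.so;

    event card_remove {
        on_error = ignore;
        action = "xscreensaver-command -lock";
    }
}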

This mostly means that there’s going to be a long way to go before this is ready, and I’m pretty sure I’ll have to write complete documentation on how to set it up, rather than just a blog post with a video, but at least it’s going to be feasible at some point.

Please feel free to comment on whether the video is useful at all or not; I’m trying to experiment with less boring methods of explaining stuff related to Gentoo and free software in general, but I have no clue whether it’s working or not, yet.