GnuPG Agent Forwarding with OpenPGP cards

Finally, after many months’ (a year’s?) absence, I’m officially back as a Gentoo Linux developer with proper tree access. I have not used my powers much yet, but I wanted to at least point out why it took me so long to come back.

There are two main obstacles that I was facing: the first was that the manifest signing key needed to be replaced for a number of reasons, and I had no easy access to the smartcard with my main key, which I’ve been using since 2010. Instead I set myself up with a separate key on a “token”: a SIM-sized OpenPGP card installed into a Gemalto fixed-card reader (IDBridge K30). Unfortunately this key was not cross-signed (and still isn’t, but we’re fixing that).

The other problem is that for many (yet not all) packages I worked on, I would work on a remote system, one of the containers in my “testing server”, which also host(ed) the tinderbox. This means that the signing needs to happen on the remote host, even though the key cannot leave the smartcard plugged into the local laptop. Forwarding GnuPG access is not trivial, but it has fairly recently become possible without too many intrusive hacks.

The first thing to know is that you really want GnuPG 2.1; this is because it makes your life significantly easier, as key management is handed over to the agent in all cases, which means there is no need for “stubs” of the private key to be generated in the remote home. The other improvement in GnuPG 2.1 is better socket handling: on systemd it uses the /run/user path, and in general it uses a standard named socket with no way to opt out. It also allows you to define an extra socket that is allowed to issue signature requests, but not to modify the card or secret keys, which is part of the defence in depth when allowing remote access to the key.

There are instructions which should make it easier to set up, but they don’t quite work the way I read them, in particular because they require a separate wrapper to set up the connection. Instead, together with Robin we managed to figure out how to make this work correctly with GnuPG 2.0. Of course, since that Sunday, GnuPG 2.1 was made stable, and so it stopped working, too.

So, without further ado, let’s see what is needed to get this to work correctly. In the following example we assume we have two hosts, “local” and “remote”; we’ll have to change ~/.gnupg/gpg-agent.conf and ~/.ssh/config on “local”, and /etc/ssh/sshd_config on “remote”.

The first step is to ask gpg-agent to listen on an “extra socket”, which is the restricted socket that we want to forward. We also want it to keep the display information in memory; I’ll explain why towards the end.

# local:~/.gnupg/gpg-agent.conf

keep-display
extra-socket ~/.gnupg/S.gpg-agent.remote

This is particularly important for systemd users because the normal sockets would be in /run and so it’s a bit more complicated to forward them correctly.
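On recent enough versions of GnuPG you can also ask gpgconf where the agent sockets live, which helps in finding the right path to forward; note that the agent-extra-socket entry is only present on newer 2.1 releases, so treat this as a hint rather than gospel:

gpgconf --list-dirs agent-extra-socket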

Secondly, we need to ask OpenSSH to forward this Unix socket to the remote host; for this to work you need at least OpenSSH 6.7, but since that’s now quite old, we can be mostly safe to assume you are using that. Unlike GnuPG, SSH does not expand the tilde for home, so you’ll have to spell out the full, actual paths of both sockets.

# local:~/.ssh/config

Host remote
RemoteForward /home/remote-user/.gnupg/S.gpg-agent /home/local-user/.gnupg/S.gpg-agent.remote
ExitOnForwardFailure yes

Note that the paths need to be fully qualified and are in the order remote, local. The ExitOnForwardFailure option ensures that you don’t get a silent failure to listen to the socket and fight for an hour trying to figure out what’s going on. Yes, I had that problem. By the way, you can combine this just fine with the now not so unknown SSH tricks I spoke about nearly six years ago.
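A quick way to verify that the forwarding actually works, assuming the example paths above, is to log in and check that the socket showed up on the remote side:

ssh remote ls -l /home/remote-user/.gnupg/S.gpg-agent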

Now is the slightly trickier part. Unlike the original gpg-agent, OpenSSH will not clean up the socket when it’s closed, which means you need to make sure it gets overwritten. This is indeed the main logic behind the remote-gpg script that I linked earlier, and the reason for that is that the StreamLocalBindUnlink option, which seems like the most obvious parameter to set, does not behave like most people would expect it to.

The explanation for that is actually simple: as the name of the option says, this only works for local sockets. So if you’re using the LocalForward it works exactly as intended, but if you’re using RemoteForward (as we need in this case), the one on the client side is just going to be thoroughly ignored. Which means you need to do this instead:

# remote:/etc/ssh/sshd_config

StreamLocalBindUnlink yes

Note that this applies to all the requests. You could reduce the possibility of bugs by using the Match directive to limit the override to the single user you care about; that’s left up to you as an exercise, though a sketch follows below.
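Something along these lines should be accepted, since sshd_config(5) lists StreamLocalBindUnlink among the keywords allowed inside a Match block (the user name is, of course, a placeholder):

# remote:/etc/ssh/sshd_config

Match User remote-user
    StreamLocalBindUnlink yes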

At this point, things should just work: GnuPG 2.1 will notice there is a socket already, so it will not start up a new gpg-agent process, while still starting up every other component that is needed. And since, as I said, the stubs are not needed, there is no need to use --card-edit or --card-status (which, by the way, would not work anyway, as they are forbidden by the extra socket).

However, if you try at this point to sign anything, it’ll just fail because it does not know anything about the key; so before you use it, you need to fetch a copy of the public key for the key id you want to use:

gpg --recv-key ${yourkeyid}
echo test | gpg -u ${yourkeyid} --clearsign

(It will also work without -u if that’s the only key it knows about.)

So what about keep-display in local:~/.gnupg/gpg-agent.conf? One of the issues I faced with Robin was gpg failing with something about “file not found”, though obviously the file I was using was found. A bit of fiddling later found these problems:

  • before GnuPG 2.1 I would start up gpg-agent with the wrapper script I wrote, and so it would usually be started by one of my Konsole sessions;
  • most of the time the Konsole session with the agent would be dead by the time I went to SSH;
  • the PIN for the card has to be typed on the local machine, not remote, so the pinentry binary should always be started locally; but it would get (some of) the environment variables from the session in which gpg is running, which means the shell on “remote”;
  • using DISPLAY=:0 gpg would make it work fine as pinentry would be told to open the local display.

A bit of sniffing around the source code brought up that keep-display option, which essentially tells pinentry to ignore the session where gpg is running and only consider the DISPLAY variable that was set when gpg-agent was started. This works for me, but it has a few drawbacks: it would not work correctly if you tried to use GnuPG outside of the X11 session, and it would not work correctly if you have multiple X11 sessions (say, through X11 forwarding). I think this is fine.

There is another general drawback to this solution: if two clients connect to the same SSH server with the same user, the last one connecting is the one that actually gets to provide its gpg-agent; the other one will be silently overruled. I’m afraid there is no obvious way to fix this. The way OpenSSH itself handles this for SSH agent forwarding is to provide a randomly-named socket in /tmp and set an environment variable to point at it. This would not work for GnuPG anymore, because it has now standardised the socket name and removed support for passing it in environment variables.

NFC and payment cards, be scared now.

In my previous post I warned people not to share the output of cardpeek with others, as it includes data such as the full 16-digit number of the card (or 15 or 19 digits, depending on the type) and its expiration date.

The reason why that happens is that the EMV implementation requires exposing the data from the magnetic stripe over the chip; this data is defined as Track1, Track2 and Track3 — classically only the first two are of relevance for credit cards, but at least on Italian debit cards (Bancomat), like the one I discussed yesterday, the third is present as well.

Track1 contains the name on the card, while Track2 contains, as I said, the full card number and expiration date. The only thing that is missing is the CVV/CVC/CV2, you name it, the three digits that are printed (not embossed!) on the back of the card. Recording magnetic stripe data is trivial with a skimmer – if you’re interested, check Krebs’s blog – but recording the data from a chip is not much more complex, if you can hack the firmware of the terminal device.

The difficulty, after copying the tracks’ data, is making a copy of the card itself. At least in theory, the private key used for enciphered PIN verification is supposed to be impossible to extract, which makes duplicating a chip not feasible — again, in theory, as I’ve pointed out how many different CVM policies are configured on cards, and some of them do not require enciphered PIN (in particular, Italian debit cards seem to be the worst offenders). Similarly, online transactions nowadays always require the CVV code, which is not available on the magnetic stripe or in the EMV data.

On the other hand, the fact that magnetic-stripe usage is still allowed (largely because the United States has not moved to the new technology yet) means that just snooping Track1 and Track2 data allows for in-store transactions with a fake card. If chip-and-pin cards are not usually cloned in Europe, it’s just a matter of lower direct benefits for fraudsters: even if you can read the data with a hacked terminal, you have to sell the data somewhere else for it to be used.

But everything I said up to now involves having the card in your hands, or using a hacked terminal, both of which are pretty risky options. There is a more “interesting” approach, thanks to the current move to NFC-enabled payment cards (my Irish debit card has it too). It does not really look like one of those NCIS episodes with the fraudster just brushing against people in the street, but it comes close enough.

While NFC payment only works for non-CVM-required transactions (less than 15 euro or 25 dollars), it does expose the full tracks’ data over the contactless interface, which means it’s just a tap away from being cloned. Sure, you still need physical contact with the card, but there are a few reasons why I find it much more worrisome than cloning from the stripe or chip.

The first problem is that the tools required to skim the data out of the chip or the magnetic stripe are much harder to come by than an NFC reader: all you need is an NFC-enabled phone (such as a Nexus 4 or 5) and the right app. You can for instance look at cardtest, which will show you all the details of the card just by tapping it on the phone — the app will hide the full number of the card, but that is done in software; the NFC inspection has already read the full number.

And the card itself will gladly talk through your average wallet – sure, there are RFID-blocking wallets, but they are rarely good quality – so it’s just a matter of getting the phone, or one of the many RFID readers, over or under the wallet. Maybe it’s my wannabe-writer imagination at work here, but I can see how a few strategically-placed RFID readers embedded in the table around the till of a store could read a lot of cards, even those that are not being used to pay, muddling the waters quite a bit.

There is another point of view as well, which can be interesting. Even cards that are NFC-enabled are mailed, at least in Europe, in standard paper envelopes. These do nothing to protect you from NFC skimmers; a malicious postman can easily skim the cards with his unmodified cellphone, by just tapping the letters that feel like they contain a card of some kind. I tried this myself the other day as I received an Irish, government-issued card through the mail: just leaving it on top of my laptop and running pcsc_scan made it work, and using my cellphone was just as easy. All without opening the envelope or even making it look tampered with! And yes, of course the cards are not shipped active, but just wait a week or two and they’ll be — it’s rare for someone, like me, to have to wait for the card to be shipped to a different country before it gets enabled.

So what can we do about this? Well, I’m not sure; I’m not that much of an expert. My best bet up to now is to keep as many NFC-enabled cards (Leap, DublinBikes, ZapaTag, Oyster, etc.) in my wallet as possible, to mix up the signal from the actual payment card. This tends to work, but it’s just a matter of retries until the right card comes up. I guess it’s time for me to consider buying one of those two-dozen-cards aluminium holders, which are usually shielded against RFID access, and for you too.

Other than that, the usual advice applies: make sure to check your statements, and report to your institution quickly if something looks odd!

My time abroad: chip’n’pin

A couple of months ago I started gathering content to write about payment cards of various types, after discussing the differences in payment cards between countries with some colleagues. I still have the draft there, with a bunch of connected links to expand upon, but I realized that it was going to become really unwieldy and, honestly, not that interesting to the masses anyway. I decided then to limit myself to providing some commentary on one of the banes of my existence here in Dublin: chip-and-pin cards.

My American readers might know chip-and-pin just by name; my Italian readers will probably not know what it is at all, given that the term was never really used in Italy, even though the technology is very much in use. In Europe, most payment (credit and debit) cards are actually smartcards, and their chip, rather than the magnetic stripe, is used for the payment. In the US this is not common at all, although it is changing as we speak.

The presence of the chip, though, does not by itself make the card a chip-and-pin card. Indeed, I have two credit cards I brought from Italy, and both have trouble working in Ireland, where chip-and-pin has been enforced for a while — the same is true in the UK, and indeed when I first visited London I knew that was the case, but my bank manager, and the documentation he had available, had no clue about it. My Italian cards are instead chip-and-signature: you don’t swipe them, but you still get the same kind of receipt that you have to sign. This has been the default for credit cards in Italy for the longest time. Some banks, and American Express, do provide Italian customers with chip-and-pin cards; on the other hand, I’ve been told that US Amex provides chip-and-signature cards nowadays.

But the funny part is that one of the two Italian credit cards I have does have a PIN, and I know I’ve been asked for it at least once in London. So how does that work? If you own a smartcard reader – I do – you can easily find out the way it works using cardpeek. This tool includes inspector protocols for a series of different smartcard applications, including EMV, the application type used by chip-and-pin (and also chip-and-signature) cards.
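Assuming pcscd is running and the card is inserted, the dance is roughly the following; cardpeek is a GUI tool, so the last step happens from its menus, and I’m quoting the menu name from memory:

pcsc_scan    # confirm that the reader and the card are detected
cardpeek     # then run the EMV script from the analyzer menu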

All of this combined makes for a headache of some cards working in some countries and not others (my Irish debit card does not reliably work in the US, but sometimes it does; one of my Italian credit cards always works fine in the US but does not work in Switzerland; and so on and so forth). Unfortunately I did not bring with me the collection of older cards that I owned, or I could be trying an American Express too, so I’ll have to stop my description at an Italian debit card (chip-and-pin), an Italian credit card (chip-and-signature), an Irish debit card (chip-and-pin, contactless), and an Irish credit card (chip-and-pin).

When you inspect an EMV card with cardpeek, you can identify the Cardholder Verification Method (CVM) records, which are, basically, an ordered list of options to validate a transaction. In the case of my Italian credit card, these read:

  • Fail cardholder verification if this CVM is unsuccessful: Signature (paper) — If terminal supports the CVM
  • Fail cardholder verification if this CVM is unsuccessful: Enciphered PIN verified online — If terminal supports the CVM
  • Fail cardholder verification if this CVM is unsuccessful: Plaintext PIN verification performed by ICC — If terminal supports the CVM
  • Fail cardholder verification if this CVM is unsuccessful: No CVM required — Always

What this implements is a very restrictive CVM list; in particular, if the terminal supports paper signatures, that’s the only option that the chip gives to the vendor. Now, in Ireland there are many terminals that theoretically support signature verification, but the vendors themselves will not accept them; the reason is that in that case the liability for fraud lies with the vendor, rather than with its bank. The same problem in Italy is tackled by requiring photo ID every time you use the credit card, but that is not an option here in Ireland, where no photo ID is mandatory to carry.
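As a reference for reading these dumps: each rule in the CVM list is stored on the card as a two-byte entry, a CVM code followed by a condition code, as defined by EMV Book 3. If I read the specification correctly, the first rule of the list above would be encoded like this:

1E 03    # 0x1E = signature (paper); bit 0x40 clear = “fail if unsuccessful”
         # 0x03 = “if terminal supports the CVM”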

It’s very interesting to check the Italian debit card’s CVM too. It’s interesting because the card has two applications installed on it: one is Maestro and the other is PagoBANCOMAT, the Italian bank-operated debit card circuit. The latter has a single CVM supported: “Fail cardholder verification if this CVM is unsuccessful: Plaintext PIN verification performed by ICC — Always”, which basically means that every single operation happens through the card’s verification of the user’s PIN. On the other hand, the Maestro app has a list:

  • Apply succeeding CV rule if this rule is unsuccessful: Enciphered PIN verified online — If unattended cash
  • Fail cardholder verification if this CVM is unsuccessful: Enciphered PIN verified online — If manual cash
  • Fail cardholder verification if this CVM is unsuccessful: Plaintext PIN verification performed by ICC — If terminal supports the CVM
  • Fail cardholder verification if this CVM is unsuccessful: Enciphered PIN verified online — Always

You can see that it’s an interestingly complicated series of options; in particular it seems like “manual cash” only works with online PIN, and it’s preferred to use online PIN for unattended cash, but for everything else, if the terminal supports offline PIN, that’s what it has to use. I’m not sure why this happens, but this particular card does not always work here in Ireland either.

So what about the second Italian credit card?

  • Apply succeeding CV rule if this rule is unsuccessful: Signature (paper) — If terminal supports the CVM
  • Apply succeeding CV rule if this rule is unsuccessful: Enciphered PIN verified online — If terminal supports the CVM
  • Apply succeeding CV rule if this rule is unsuccessful: Plaintext PIN verified online — If terminal supports the CVM
  • Fail cardholder verification if this CVM is unsuccessful: No CVM required — If terminal supports the CVM

So this card is actually very permissive; it’s probably not by chance that this is the only card I can use in the US without risks of getting it rejected. The Irish debit card is a bit more complex too, and not as reliable in the US:

  • Apply succeeding CV rule if this rule is unsuccessful: Enciphered PIN verified online — If unattended cash
  • Apply succeeding CV rule if this rule is unsuccessful: Enciphered PIN verification performed by ICC — If terminal supports the CVM
  • Fail cardholder verification if this CVM is unsuccessful: Plaintext PIN verification performed by ICC — If terminal supports the CVM
  • Fail cardholder verification if this CVM is unsuccessful: Enciphered PIN verified online — If terminal supports the CVM
  • Fail cardholder verification if this CVM is unsuccessful: Signature (paper) — If terminal supports the CVM
  • Fail cardholder verification if this CVM is unsuccessful: No CVM required — Always

Again, unattended cash prefers online verification, but then everything else prefers offline. Unlike the Italian debit card, though, enciphered PIN is preferred over plaintext one. And surprisingly enough, the same CVM is present on the NFC interface.

Finally, this is the CVM list for my Irish credit card:

  • Apply succeeding CV rule if this rule is unsuccessful: Enciphered PIN verified online — If terminal supports the CVM
  • Apply succeeding CV rule if this rule is unsuccessful: Enciphered PIN verification performed by ICC — If terminal supports the CVM
  • Apply succeeding CV rule if this rule is unsuccessful: Plaintext PIN verification performed by ICC — If terminal supports the CVM
  • Fail cardholder verification if this CVM is unsuccessful: Signature (paper) — If terminal supports the CVM
  • Fail cardholder verification if this CVM is unsuccessful: No CVM required — Always

This closely resembles the permissiveness of the second Italian card (though, just for reference, that one is a MasterCard while the Irish one is a Visa). And indeed it matches the fact that this card also works flawlessly in the US. Unlike the Italian one, though, the PIN is never transmitted in plaintext for online verification, and plaintext verification is only used as a second-to-last resort within the ICC itself.

So when you expect things to be easy because your card is “chip-and-pin”, try to keep in mind that it might not be strictly true. If you’re curious about your own debit and credit cards, and you happen to have a smartcard reader, take a look at cardpeek and ask it to analyze an EMV card. Keep in mind that what you read out of the card is not to be shared with anybody as is! The full number of the card, as well as the expiration date and a bit more private data, is present in the EMV dump that cardpeek produces. For some cards, such as my Italian MasterCard, a log of the most recent transactions executed on a terminal is also available.

Smartcards again

People seem to know by now that I have a particular passion for the security devices called smartcards. I’m not sure why myself, to be honest, but a few years ago I decided to look more into this, and nowadays I have three smartcard readers at home connected to the three main computers I use, and I use an FSFe card to store my GnuPG keys and to log in to local and remote SSH services.

In Gentoo, unfortunately, most of the smartcard-related software has been vastly ignored for years, or was, and still is, only considered for the narrow use cases of a few developers and users, rather than in the general picture of it all. I have been trying to improve the situation ever since I first experimented with token-based login over a year and a half ago, but even my results are not really good.

The last hard work I did on the subject was directed toward pcsc-lite improvements, which brought me to hack at the code to improve support for two of the three devices I have here: the blutronics bludrive II CCID – which has a firmware quirk, requiring the CCID description to be looked up in the “wrong place” – and a Broadcom BCM5880 security device that provides dual-interface access to standard smartcards and to contactless cards as well — I have to thank my trip to London two years ago for having an RFID card available at home to try it out!

Since my personal smartcard setup has been mostly complete and working fine for a while now, I wasn’t planning on working hard on anything in particular, unless, as with OpenCryptoki a couple of months ago, my job required me to. On the other hand, after my complaining about stable testing last week, I started wondering if I couldn’t leverage the work I’ve been doing on OpenCryptoki to provide an easy way to test PKCS#11 software for people without the required hardware devices. Between that and a messed-up bump of OpenSC (0.12.0) in tree, I have been looking hard at the situation again.

Before moving on to describe the recent developments on the topic, though, I’d like to give an insight into why you cannot blame anyone in particular for the state of smartcard handling in Gentoo. The following UML diagram is a schematic, vastly simplified component view of the software (and, very selectively, hardware) involved in smartcard access:

Smartcard Components UML diagram

In this diagram, the deep-green interfaces (circles) are those that are standardized by multiple organisations:

  • CCID is defined by the USB Implementers Forum;
  • CT-API is defined by a number of German organisations;
  • PC/SC is specified by its own workgroup which also defines the IFD interface;
  • PKCS#11 is defined by RSA.

The red components are implemented as long-running services (daemons) on your Linux (or other Unix) system, the white ones are hardware devices, the blue ones are software libraries, and finally the green ones are the applications the users use directly! Almost every one of those components is a standalone package (the only package split into two components is GnuPG, and that’s just because Alon’s alternative SCD implementation makes it necessary to spell out the interface providers/consumers there).

This whole complexity not only makes it very difficult for distributions to manage the software correctly, but also introduces a number of sensitive points of contact between the software components, many more than one would like to have in a security-sensitive context such as smartcard handling. Sometimes I wonder if they are really secure at all.

Back to what I have been doing in Gentoo, though. My first desire was to leverage the tpm-emulator and OpenCryptoki combo to allow arch testers to test PKCS#11 packages, such as pam_pkcs11 and pam_p11 (both of which are not part of the component diagram above by choice: to add those to the diagram, I would have had to add another indirection layer – libpam – to reach a user-accessible application like login) without the need for rare, and expensive, hardware devices. I’ve been working on OpenCryptoki’s ebuild and build system for a while, rewriting its build system and doing other general improvements — unfortunately it seems to me like it still doesn’t work as it is supposed to. I thought it could have been a problem with the software token emulation implementation, so I thought it might be better to use the emulated TPM device, but even that method is not viable: even the latest version of the package does not seem to build properly against the current 2.6.38 Linux version, let alone the ancient version we have in the tree right now. I have a possibly-working ebuild for the 0.7 series (which uses cmake as its basic build system), but since I can’t get the module to build, I haven’t committed it yet. This is likely one good candidate for the Character Device in UserSpace (CUSE) interface.

With the emulator being unbuildable, and the software-emulated token seemingly not working, using OpenCryptoki was thus slated for later review. I then switched my focus from that to OpenSC: version 0.12.0 was a major change, but in Gentoo it seems to have been bumped without proper consideration: for instance, the ebuild was committed with optional pcsc-lite support, but without switches for any of the alternative interfaces, and without any more support for the OpenCT interface, which for some devices – including the iKey 3000 device that Gilles provided me with – is the only viable solution. Thanks to Alon (who’s a former Gentoo developer and an upstream developer for OpenCT/OpenSC), I was able to fix this up, and now OpenSC should be properly working in Gentoo — what is not currently implemented is support for non-OpenCT implementations of the CT-API interface, since I don’t know of any other software implementing it that is available in Portage; if you know of any, let me know and I’ll see about adding support.
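With the fixed ebuild in place, a quick sanity check that OpenSC actually sees your reader, whichever interface it ends up going through, is:

opensc-tool --list-readers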

Now, for whatever reason, last time I worked on this I ended up using pcsc-lite as my main hardware access provider – possibly because it is the easiest way to set it up for GnuPG and OpenPGP – and I didn’t want to throw it away right now, especially since I have a relatively good relationship with Ludovic (upstream) and I had already spent time fixing support for two of my three readers, as I said before. Thankfully, as the diagram suggests, OpenCT not only provides a CT-API interface, but also an IFD one, which can be used with pcsc-lite, providing layered access to OpenCT-supported readers, including the iKey 3000 that I have here. Support for that in Gentoo, though, was not really sufficient: OpenCT didn’t install a bundle file for pcscd to discover, and the recent changes that make pcscd run without root privileges left the service unable to access the OpenCT sockets — I wouldn’t mind at some point moving all of the daemons to run under the same privileges, but that might not be so good an idea, and definitely not an easy one: while I can easily change the user and group settings that pcscd runs with – thanks to the Gentoo diversion I set the privileges just once, in the pcsc-lite udev rules file – it would probably require a bit of work to make sure that OpenCT and the other smartcard-enabled services don’t step on each other’s toes. In the ~arch version of the two packages these issues are all solved, and indeed I can access the iKey 3000 device with pcsc_scan just fine, and from OpenSC as well.
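For reference, hooking OpenCT’s IFD handler into pcsc-lite goes through the usual reader configuration; a minimal sketch would look like the following, where the LIBPATH (and the file’s location) depends on where your system installs the handler:

# /etc/reader.conf.d/openct (illustrative; check your OpenCT install for the real paths)

FRIENDLYNAME "OpenCT"
DEVICENAME   /dev/null
LIBPATH      /usr/lib/openct-ifd.so
CHANNELID    0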

I am unfortunately quite far from making use of the keys stored on PKCS#11 devices in any software other than the PAM modules I have already written about. Alon’s alternative SCD implementation should make it possible to use any PKCS#11-compatible device (token or smartcard) to handle signatures for GnuPG and keys for SSH. What I’d be interested in, though, would be providing a PKCS#11 interface to the OpenPGP card I already have, so as to be able to mix devices. This should have been possible with OpenSC, as it implements an interface for OpenPGP applications and should expose it with PKCS#11 compatibility; reality, though, tends to disagree. I’m not sure whether it is simply a bug in the current code, or OpenPGPv2 cards not being supported by the project; I don’t think I’ll have enough time to work on that code anytime soon.

Alon suggested an alternative approach: using Scute (http://www.scute.org/), a project that aims at adding a PKCS#11 interface to OpenPGP cards so that they can be used with Mozilla products. Unfortunately a quick check shows that the package does not build with the current version of its dependencies. And this is another task that would require more time than I have, as I noted before (https://flameeyes.blog/2011/04/storing-packing-and-disposing-of), and thus will simply be slated for an undefined “later”.

pcsc-lite and the Gentoo diversion

You probably all know that I’m not really keen on diverging Gentoo from other distributions as long as that’s feasible, although I’m also always considering the idea of giving an extra edge to our distribution. Today, though, is one of those days when I have to implement something differently in Gentoo to have it work properly.

Today, Ludovic Rousseau released a new pcsc-lite version that is designed to improve autostart and increase safety and security by replacing a setuid-root binary with a setgid-pcscd one. Together with that, a new ccid wrapper was released that sets the permissions on USB devices via udev.

Now, while this all looks like good stuff that improves the user experience, it is mostly designed to solve issues with binary distributions — most likely, Ubuntu and its derivatives. Autostart is mostly designed to avoid using a pcscd system service; in Gentoo that’s not much of a problem, because it’s the user’s choice whether or not to start an init script, but on other distributions, as soon as you install the package, the init script is scheduled to start. Once again, that’s not much of a problem when you install a server package, as that’s the whole point of it, but the pcscd service has to be bundled with the client library — and whether to use the client library is decided at build time, so it is likely enabled for many packages on those distributions. Again, these aren’t enough of a concern for us, thanks to our customisable design.

On the other hand, the new design is troublesome for us: the daemon is started with the privileges of the current user, but with access to the pcscd group; that would be okay if it didn’t need to create files in the /var/run/pcscd directory, which we cannot simply create in the ebuild – as /var/run could be on a tmpfs instance – and which cannot simply be re-created by pcscd; it worked before because, as setuid root, it had all the privileges to do so. Ludovic suggested creating a reduced init script whose only task is to create the directory at startup, but at that point, why limit it to simply creating the directory?

The end result is as follows: the init script is updated, and it creates the directory, alright, but now it executes the pcscd process under the privileges of nobody/pcscd, rather than root, tightening security a little bit. More importantly, thanks to the fact that USB devices (and other hotpluggable devices) are handled through udev permissions, I’ve also created an extra rules file that hotplugs the service when a new device belonging to the pcscd group is added, which gives Gentoo slightly better usability than before.
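I won’t reproduce the exact rules file here, but the idea is something along these lines; the vendor and product IDs, and the exact RUN command, are illustrative rather than copied from the real file:

# illustrative udev rules, not the literal Gentoo file

SUBSYSTEM=="usb", ATTR{idVendor}=="08e6", ATTR{idProduct}=="3437", GROUP="pcscd", ENV{PCSCD}="1"
ACTION=="add", ENV{PCSCD}=="1", RUN+="/etc/init.d/pcscd start"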

Unfortunately, this complicates matters further: as the versions of ccid and pcsc-lite need to go hand in hand, stabling them is a PITA; the fact that ifd-gempc is also not fixed yet really doesn’t make it any nicer. On the other hand, the presence of this now Gentoo-specific hotplug path through udev also means that we could finally drop the HAL support in this ebuild, as that was supposed to provide the generic hotplug mechanism before.

I hope this situation will turn out good for everybody; if anything seems to be amiss, don’t hesitate to open a bug or tell me directly, and I’ll look into it.

Using the SHA2 hash family with OpenPGPv2 cards and GnuPG

I’m sure I said this before, though I don’t remember when or to whom, but most of the time it feels to me like GnuPG only works out of sheer luck, or sometimes fails to work just to spite me. Which is why I end up writing stuff down whenever I actually manage to coerce it into behaving as I wish.

Anyway, let’s start with a bit of background: a while ago, the SHA1 algorithm was deemed by most experts to be insecure, which means that relying on it for Really Important Stuff was a bad idea; I still remember reading this entry by dkg that provided a good start for setting up your system to use the SHA2 family (SHA256 in particular).

Unfortunately, when I actually got the FSFe smartcard and created the new key, I noticed (and noted in the post) that only SHA1 signatures worked; I set up the card to use SHA1 signatures, and forgot about it, to be honest. Today, though, I went to sign an email and … it didn’t work, reporting that the created signature was invalid.

A quick check around and it turns out that for some reason GnuPG started caring about the ~/.gnupg/gpg.conf file rather than the key preferences; maybe it was because I had to reset the PIN on the card when I mistyped it on the laptop too many times (I haven’t turned off the backlight since!). The configuration file was already set to use SHA256, so that failed because the card was set to use SHA1.

A quick googling around brought me to an interesting post from earlier this year. The problem as painted there seemed to exist only with GnuPG 1.4 (so not the 2.0 version I’m using) and was reportedly fixed. But the code in the actual sources of 2.0.16 tells a different story: the bug is the same there as it was in 1.4 back in January. What about 1.4? Well, it’s also not fixed in the latest release, but it is on the Subversion branch — I only noticed that afterwards, though, which explains why that solution differs from mine.

Anyway, the problem is the same in the new source file: gpg does not ask the agent (and thus scdaemon) to use any particular digest other than RMD160, which was correct for the old cards but definitely is not for the OpenPGP v2 cards that FSFE is now providing its fellows with. If you want to fix the problem and you’re a Gentoo user, you can simply install gnupg-2.0.16-r1 from my overlay; if you’re not using Gentoo but building it by hand, or you want to forward it to other distributions’ packages, the patch is also available…

And obviously I sent it upstream and I’m now waiting on their response to see if it’s okay to get it applied in Gentoo (with a -r2). Also remember that you have to edit your ~/.gnupg/gpg.conf to have these lines if you want to use the SHA2 family (SHA256 in this particular case):

personal-digest-preferences SHA256
cert-digest-algo SHA256
default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 ZLIB BZIP2 ZIP Uncompressed
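Once both the card and the configuration agree, you can check which digest is actually being used by looking at the header of a clearsigned message:

echo test | gpg --clearsign | head -n 2
# the second line should now read “Hash: SHA256”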

Smart Cards and Secret Agents

Update, 2016-11: The following information is fairly out of date, six years later, as now GnuPG uses stable socket names, which is good. Please see this newer post which includes some information on setting up agent forwarding.

I’ve been meaning to write about my adventure to properly set up authentication using the Fellowship of FSFe smartcard for quite a while, and since Markos actually brought the subject up earlier tonight I guess today is the right time. Incidentally, earlier in my “morning” I had to fight with getting it working correctly on Yamato so it might be useful after all…

First of all, what is the card and what is needed to use it… The FSFe Fellowship card is a smartcard with the OpenPGP application on it; smartcards can have different applications installed: quite a few are designed to support PKCS#11 and PKCS#15, but those are used by the S/MIME signature and encryption framework, while the OpenPGP application is designed to work with GnuPG. When I went to FOSDEM, I set up my new key using the card itself.

The card provides three keys: a signing key, an encryption key, and an authentication key; the first two are used by GnuPG, as usual; the third instead is for something that you usually don’t handle with GnuPG… SSH authentication. The gpg-agent program can actually handle your standard RSA/DSA keys for SSH, but that alone is not very useful; combined with the OpenPGP smartcard, though, it becomes very useful.

So first of all you need a compatible smartcard reader; thankfully the CCID protocol is pretty standard and should work fine. I got lucky, and three out of three smartcard readers I have work fine: one is from an Italian brand (but most likely built in Taiwan or China), the second is a Gemalto PinPad, and the third is the one integrated in my Dell laptop, a Broadcom BCM5880v3. The last one requires an updated firmware and a ccid package capable of recognizing it… the one in Gentoo ~arch is already patched so that it works out of the box. I got mine at Cryptoshop, which seems a decent place to get them in Europe.

In my experience, GnuPG at least seems to have problems dealing with pinpads, and quite a few pinpad-equipped readers seem to have driver problems; so get a cheaper, but just as valid, non-pinpad reader.

On the software side, there isn’t much you need: GnuPG itself could use the CCID readers directly, but my best luck has been using pcsc-lite; just make sure your pcsc-lite does not use HAL but rather has libusb support directly, by setting -hal usb as USE flags for it. GnuPG has to be built with the smartcard USE flag; the pcsc-lite USE flag will give you the dependency as well, but it does not change the build at all. Update: Matija noted that you also need to install app-crypt/ccid (which is the userspace driver for CCID-based smartcard readers); for whatever reason I assumed it was already a dependency of the whole set, but that is not the case.
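In Portage terms, and using the flag names described above, this boils down to something like the following; treat it as illustrative, as the exact flag set may differ between versions:

# /etc/portage/package.use

sys-apps/pcsc-lite -hal usb
app-crypt/gnupg smartcard pcsc-lite

followed by an emerge of app-crypt/ccid together with the two packages above.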

Make sure the pcscd service is started with the system: you’re gonna need it.
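On a standard OpenRC setup, that means:

rc-update add pcscd default
/etc/init.d/pcscd start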

To actually make use of the key properly you’re going to need to replace ssh-agent with gpg-agent… More interestingly, GNOME Keyring also replaces ssh-agent, but if you let it do so, it won’t handle your OpenPGP card’s auth key! So you’re going to have to override that. Since using the keyring with this setup seems to be impossible, my solution is to use a simple wrapper, which I now release under a CC-BY license.
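The released script handles more corner cases; reduced to a minimal sketch, the core logic goes something like this (this is not the released script, and it assumes the 2.0-era --write-env-file behaviour of gpg-agent):

# minimal sketch of the wrapper logic, not the released script
envfile="${HOME}/.gpg-agent-info"
if [ -r "${envfile}" ] && kill -0 "$(cut -d: -f 2 "${envfile}" | head -n 1)" 2>/dev/null; then
    . "${envfile}"   # agent already running, reuse its sockets
else
    eval "$(gpg-agent --daemon --enable-ssh-support --write-env-file "${envfile}")"
fi
export GPG_AGENT_INFO SSH_AUTH_SOCK SSH_AGENT_PID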

You have to run this script in every shell, and in your X session as well, for this to work as intended (it is needed in the X session so that it works with libvirt over SSH, otherwise virt-manager will still try to get the key from gnome-keyring). To do so, I added a source of that script in both my ~/.shrc file and my ~/.xsession file, and made sure the latter is called; to do so I have this:

# in both ~/.shrc and ~/.xsession:
. /path/to/gpg-agent-wrapper

# in /etc/X11/xinit/xinitrc.d/01-xsession
[ -f ${HOME}/.xsession ] && . ${HOME}/.xsession

The trick of the script is making sure that gpg-agent is not already running, and that it does not collide with the current information, but it also takes care of overriding gnome-keyring (this could also be done by changing the priority of ~/.xsession to be higher than gnome-keyring’s), and ensures that SSH agent forwarding works… and yes, it works even if the client uses gpg-agent for SSH, which means it can forward the card’s authentication credentials over a network connection.

So here it is, should be easy enough to set up for anybody interested.

How special PAM support gets added to Gentoo

You might wonder why the PAM support for special authentication methods is somewhat lacking in Gentoo; the reason is that, mostly, I maintain PAM alone, which means that you get to use whatever I use myself most of the time. One of the things I was very upset we didn’t support properly was smartcard/token-based authentication; unfortunately, while I got two smartcard readers in the past months to do some work, I hadn’t fetched a smartcard yet, and tokens seem to be quite difficult to find for end users like me.

Thanks to Gilles (Eva), I now have a token to play with, and that means I’m looking to write up proper support for token-based authentication (and thus smartcard-based as well). This already started well, because I was able to get one patch (split in three) merged in pam_pkcs11 upstream (available in the Gentoo 0.6.1-r1 ebuild), as well as cleaning up the ebuild to work just like it’s supposed to as a PAM ebuild (for instance, not installing the .la files, which are not used at all).

But since this is not yet ready to use, it’s easier if I show you how it works after a day or two of tweaking (video):

Yes today I was quite bored.

Please note that this is not really “production ready” in my opinion:

  • the pam_pkcs11 module uses the /etc/pam_pkcs11 directory for configuration, but almost all PAM modules use /etc/security for their configuration;
  • the pkcs11_eventmgr daemon has to be started by the user manually, but it uses a single, system-wide configuration file (/etc/pam_pkcs11/pkcs11_eventmgr.conf); this does not really seem to me to be the right way to handle it, but I’ll have to discuss that with upstream;
  • most likely we want to provide, based on USE flag or in a different ebuild, some scripts to handle the event manager more easily, for instance making it start on each X and console login, and making sure that the login is locked as soon as the key is removed;
  • the event manager polls for the card, which is using CPU and power for no real good reason; a proper way to handle this would require for udev to send signals on plug and remove so that the event manager can handle that; since the exact key needed is unlikely to be known at rules-generation time, this might require adding a central daemon monitoring all the smartcards and tokens and passing the information to registered event managers.
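For the curious, wiring the module into a PAM stack is the easy part; an illustrative (and decidedly not production-ready) stack would look like this, with the exact service file depending on your setup:

# e.g. /etc/pam.d/system-local-login (illustrative)

auth    sufficient    pam_pkcs11.so
auth    include       system-auth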

This mostly means that there’s going to be a long way to go before this is ready, and I’m pretty sure I’ll have to write complete documentation on how to set it up, rather than just a blog post with a video, but at least it’s going to be feasible, at some point.

Please feel free to comment on whether the video is useful at all or not; I’m trying to experiment with less boring methods of explaining stuff related to Gentoo and free software in general, but I have no clue whether it’s working or not, yet.

In the land of smartcards

Even though I did post that I wanted to get onto hardware signatures, I ended up getting a USB smartcard reader for a job that requires me to deal with some kind of smartcards; I cannot go much further on the matter right now, though, so I’ll skip over most of the notes here.

Now, since I got the reader, but not yet most of the specifics I need to actually go on with the job, I’ve been playing with actually getting the reader to work with my system. Interestingly enough, as usual, the first problem is very Gentoo-specific: the init script does not work properly, and I’m now working on fixing that up.

But then the problem is to actually find a smartcard to test with; in my haste I forgot about getting at least one or two smartcards to play with when I ordered the device, and now it’d be stupidly expensive to order them. Of course I’ll get around to it this time and get myself the Italian electronic ID card (CIE), but even that does not come cheap (€25, and a full morning wasted), and I cannot just do that right now.

So I went around to see what I had at home with a smartcard chip; after discarding my old, expired MasterCard (even though I thought about it before, I was warned against trying that), I decided to try with a GSM SIM card, which I had lying around (I had to get a new one to switch my current phone plan to a business subscriber plan; before, I was using a consumer pre-paid plan).

Now, although I was able to test that the reader detects and initialises the card correctly (although it is not in the pcsc-tools database!), I wanted to see if it was actually possible to access it fully; luckily, a Gentoo user’s page sent me to some software, written by an Italian programmer, that should do just that: monosim, which, as you’d expect, is written in C# and Mono, which is good given that I’m currently doing the same for another customer of mine.

Unfortunately, it seems like the mono problem comes up again: upstream never considered the fact that the libpcsclite.so ABI changes between different architectures, even on the same operating system. Not that I find that a good idea in general, since I always try to stick with properly-sized parameters (thanks stdint.h), but it happens, and we should get ready to actually resolve the problems when they appear.

Now, I really don’t even want to get started on all the mess that RMS has uncovered lately; just like I did a few years back, I’ll replace Stallman’s idealistic problems with technical limitations; see for instance my post about “the java crap” (which, by the way, hasn’t finished being a problem, outlasting the idealistic ones).

And I’m still waiting for Berkeley DB to finish its testsuite, after more than twelve (12!) hours, on an eight-core system, with parallel processes (I get five TCL processes hogging the same number of cores at almost any time). I don’t even want to think how long it would take on a single-core system. Once that’s done, I can turn the system down for some extraordinary maintenance.

Hardware signatures

If you read Planet Debian as well as this blog, you probably have noticed the number of Debian developers that changed their keys recently, after the shadows cast over the SHA-1 hash algorithm. It is debatable on whether this is an issue now or not, but that’s not what I want to discuss.

There are quite a few reasons why Debian developers are more interested in this than Gentoo developers; while we also sign manifests, there are quite a few things that don’t work that well in our security infrastructure, which we should probably pay more attention to (but I don’t want to digress now), so I don’t blame their consideration of tighter security.

I’m also considering the switch; while I’ve had my key for quite a while, there are a few issues with it: it’s not signed by any Gentoo developer (I actually don’t think I have met anybody in person to be able to exchange documents and such), the Manifest signing key is not a subkey of my actual primary key (which, by the way, contains lots of data from my previous “personas” that doesn’t matter any longer), and so on and so forth. Revoking it all and starting anew might be a good solution.

But, before proceeding, I finally want to get this over with and move to hardware cryptography if possible; I already expressed interest before, but I never dug deep enough to find the important information; now I’m looking for exactly that. And I want a solution that works in the broadest extension of cases:

  • I want it to work without SHA-1; I guess this already starts to be difficult; while it’s not clear whether SHA-1 is weak enough to be a vulnerability or not, being able to ignore the issue by using a different algorithm is certainly a desirable feature;
  • I want it to work with GnuPG and OpenSSH at least; if there is a way to get it to work with S/MIME it might also be a good idea;
  • I want it to work on both Linux and Mac OS X: I have two computers in my office: Yamato running Gentoo and Merrimac running OSX; I have to use both, and can’t do without either; I don’t care if I don’t have GnuPG working on OSX, I still need it to work with OpenSSH, since I would like to use it for remote access to my boxes;
  • as an extension to the previous point, I guess it has to be USB; not only can I switch it between the two systems (hopefully!), I’m also going to get a USB switch to use a single console between the two.

I guess the obvious solution would be a tabletop smartcard reader with one or more cards (and I could get my ID card to be a smartcard), but there is one extra point: one day I’m going to have a laptop again, what then? I was thinking about all-in-one tokens, but I have even less knowledge about those than I have about smartcards.

Can anybody suggest a solution? I understand that the FSFE card only supports 1024-bit keys, which seem to be considered weak lately; no idea how much of that is true, though, to be honest.

So, suggestions, very welcome!