SMS for two-factor authentication

Having now spent a few days at the 34C3 (I’ll leave my comments on that for a discussion over a very warm coffee with a nice croissant on the side), I have already heard a few presenters reference the SS7 hack as an everyday security threat. You probably are not surprised that I don’t agree it is an everyday threat, while still thinking it is a real security issue.

Indeed, while some websites refer to the SS7 hack as the reason not to use SMS for authentication, you can at least find more reasonable articles talking about the updated NIST recommendation. My own preference for TOTP (as used in authenticator apps) comes down to not needing to be registered on a mobile network, and being able to log in while on a flight.

I want to share some more thoughts that add to the list of reasons not to use SMS as a second authentication factor. While I do not believe the SS7 attack is something all users should account for in their day-to-day life, it is one more entry on the list of reasons why SMS auth is not a good idea at all, and I don’t want people to think that, since they are unlikely to be attacked by someone leveraging SS7, they are fine using SMS authentication instead.

The obvious first problem is reliability: as I said in the post linked above, SMS-based authentication requires you to have access to the phone with the SIM, and for it to be connected to the network and able to receive the SMS. This is fine if you never travel and have good reception wherever you need to log in. But I myself have had multiple situations in which I was unable to receive SMS (on flights, as I already said; while visiting China the first time, with 3 Ireland not giving pay-as-you-go customers access to any roaming partner; or even at my own desk at the office with 3 UK after I moved to London).

This lack of reliability is unfortunate, but not by itself a security issue: it prevents you from accessing the account the 2FA is set on, and that sounds like it’s doing what it’s designed to do, failing closed. At the same time, it increases the friction of using 2FA, reducing usability, and pushing users, bit by bit, to stop using 2FA altogether. Which is a security problem.

The other problem is the ability to intercept or hijack those messages. As you can guess from what I wrote above, I’m not referring to SS7 hacks or equivalent. It’s all significantly simpler.

The first way to intercept SMS auth messages is having access to the phone itself. Most phones, iOS and Android alike, are configured to show new text messages on the lock screen. In some cases only part of the message is visible, and to read the rest you’d have to unlock the phone – assuming the phone is locked at all – but 2FA messages tend to be short and to the point, showing the code in the preview for ease of access. On Android, such a message can also be swiped away without unlocking the phone. A user who fell victim to this type of interception might have a very hard time noticing it, as nowadays the SMS app is likely not opened very often, and a swiped-away notification would take time to be noticed¹.

The hijacking I have in mind is a bit more complicated, and (probably) more noticeable. Instead of using the SS7 hack, you can just take over a phone number by leveraging the phone providers themselves. This can be done in (at least) two ways: you can convince the victim’s provider to provision the number onto a new SIM card within the same operator, or you can port the number to a new operator. The complexity or ease of these two processes varies a lot between countries; some are safer than others.

For instance, the UK system for number portability is the most secure (if not the most user-friendly) I have seen. The first step is to get a Porting Authorisation Code (PAC) for the number you want to port, which you do by calling the current provider. None of the three providers I have had in the UK so far had any way to get this code online, which is a tad safer, as a compromised password cannot by itself grant full control of the line. And while the code could be “intercepted” the same way I pointed out above for authentication codes, the new operator does get in touch to remind you when the port will take place, giving you a chance to contact them if something doesn’t sound right. In the case of Vodafone, they also send you an email when you request the PAC, meaning just swiping away the notification is not enough to hide the fact that it was requested in the first place.

In Ireland, a portability request completes in less than an hour, and only requires (brief) access to the line you want to take over, as the new operator sends a code to it to confirm the request. This means the process, while significantly easier for customers, is also extremely insecure. In Italy, I actually went to a store with the line I wanted to port, and I don’t remember them asking for anything but my ID to open the new line. No authentication code is involved at all, so if you can fake good enough documents, you can likely take over any line. I do not remember whether my old SIM card was notified before the move. I have not tried number portability in France, but it appears you can get the RIO (the equivalent transfer code) from the online system of Free, at the very least.

The good thing about all the portability processes I’ve seen so far is that at least they do not drop a new number onto the old SIM card. I was told this is (or at least was) actually common for US providers, where porting a number out just assigns a new number to the old SIM. In that case, it would probably take a while for victims to notice their number had been taken over, which would be no surprise.

If you’re curious, you can probably try that by yourself. Call your phone provider from another number than your own, see how many and which security questions they ask you to identify that it is actually their customer calling, instead of a random stranger. I think the funniest I’ve had was Three Ireland, that asked me for the number I “recently” sent a text message to, or a recent call made or received — you can imagine that it’s extremely easy to force you to get someone to send you a text message, or have them call you, if you’re close enough to them, or even just have them pick up the phone and reporting to the provider that you were last called by the number you used.

And then there is the other interesting point of SMS-based authentication: the codes last longer. A TOTP code has a short lifetime by design, as it is time-based. Even allowing for clock drift, most implementations I’ve seen keep a single code valid for at most around twice the time it is displayed in the authenticator, by also accepting the code from the previous or next time step. With the RFC 6238 default of a 30-second step, a code is typically accepted for no more than about 90 seconds in total. But text messages can (and do) take much longer than that to arrive.
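As a sketch of that window arithmetic, here is a minimal TOTP generator and verifier following the RFC 6238/4226 construction; the `drift` parameter is my own name for the clock-drift tolerance most implementations apply:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the number of elapsed time steps."""
    counter = struct.pack(">Q", timestamp // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation from RFC 4226
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

def verify(secret: bytes, code: str, now=None, step: int = 30, drift: int = 1) -> bool:
    """Accept the current step plus `drift` steps either side, so a code
    stays usable for at most (2 * drift + 1) * step seconds overall."""
    now = int(time.time()) if now is None else now
    return any(hmac.compare_digest(totp(secret, now + i * step, step), code)
               for i in range(-drift, drift + 1))
```

With the defaults, a code generated at second 59 is still accepted one step later, but rejected four steps later: the window closes by itself, something an SMS code delivered minutes late cannot afford to do.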

And this is particularly useful when attacking systems that do not implement 2FA correctly. Entering a wrong OTP does not usually invalidate a login attempt, at least not on the first mistake, because users can mistype, or miss the window to send an OTP. But sometimes there are egregious errors, such as the one made by N26, which neither rate-limited nor invalidated OTP requests, allowing simple bruteforcing of a valid OTP. Since TOTP codes rotate without a new code being requested, bruteforcing them gives you a particularly short window of viability… SMS codes, on the other hand, open up a much larger window of opportunity.
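A minimal sketch of what N26 was missing might look like the following; the class and its names are hypothetical, chosen for illustration, not taken from any real system:

```python
import secrets

class SmsOtpVerifier:
    """Hypothetical server-side verifier: it rate-limits guesses and
    invalidates the code, closing the bruteforce window described above."""
    MAX_ATTEMPTS = 3

    def __init__(self) -> None:
        self.code = f"{secrets.randbelow(10 ** 6):06d}"  # code sent via SMS
        self.attempts = 0

    def check(self, guess: str) -> bool:
        if self.code is None:
            return False  # already used or burned: a fresh code is required
        self.attempts += 1
        if secrets.compare_digest(guess, self.code):
            self.code = None  # single use: a correct guess spends the code
            return True
        if self.attempts >= self.MAX_ATTEMPTS:
            self.code = None  # too many failures: invalidate, force a re-send
        return False
```

With a cap of three guesses per code, bruteforcing a 6-digit OTP goes from a certainty (given enough requests) to a 3-in-a-million chance per SMS sent.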

Oh, and remember the Barclays single-factor authentication? How long do you think it would take to spoof the sender number of the SMS that needs to be sent (with the text “Y”) to authorize the transaction, even without having access to the text message that was received?


  1. This is an argument for receiving your text messages in another of your messaging apps, whether that is Facebook Messenger, Hangouts or Signal. Assuming you use one of those on a day-to-day basis, you would then have an easy way to notice messages you had not seen before.

Barclays and single-factor authentication

In my previous post on the topic I barely touched on one of the important reasons why I did not like Barclays at all. The reason for holding back was that I still had money in my account with them, and I wanted to make sure that was taken care of before lamenting further about the state of their security. Now that I have managed to close my account, I can discuss this further, even though I have already touched upon the major points.

Barclays online banking system relies heavily on what I would define as “single factor authentication”.

Usually, authentication factors are defined as things you have and things you know. In the case of Barclays, the only thing they effectively rely upon is access to the debit card. Okay, technically you could say that by itself is a two-factor system, as it requires access to both the debit card and its PIN. And since the EMV-CAP protocol they use for this factor executes directly on the chip card, it is not susceptible to the usual PIN-stripping attacks that most card fraud against chip-and-PIN cards relies on.

But this does not count for much when the PIN of the card they issued me was 7766 — and lamenting about that is why I waited to close the account and give them back the card. There seems to be a pattern of banks issuing “easy to remember” 4-digit PINs: XYYX, XXYY, and so on. One of my previous (again, cancelled) cards had a PIN terribly easy to remember for a computerist, if not for the average person: 0016.

Side note: I have read someone suggesting badly scribbling a wrong PIN on the back of a card as theft prevention. Though I like the idea, I’m afraid the banks won’t like it anyway. It would also take some work to make the scribble easy to misread as different digits, so that a thief burns through the three attempts needed to block the card.

You access the Barclays online banking account through the Identify mode of CAP: you put the card into the reader, provide the PIN, and get an 8-digit identifier that can be used to log in on the website. Since I’m no expert on how CAP works internally, I will only venture a guess that this is similar to a counter-based OTP, as the card has no access to a real-time clock, and no challenge is provided for this operation.
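To illustrate that guess (and only the guess: this is the generic RFC 4226 HOTP construction, not Barclays’ actual algorithm), a counter-based OTP needs no clock at all. The card just increments a counter each time it generates a code, and the bank tracks the same counter with some tolerance:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 8) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a monotonically increasing counter,
    truncated to a short decimal code of the same 8-digit shape as the
    CAP Identify output."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)
```

Because both sides only need to agree on the secret and an approximate counter value, a scheme like this fits a chip card with no battery or clock.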

This account access sounds secure, but it is really no more secure than a username and password, at least when it comes to phishing. You may think that producing a façade that shows the full Barclays login and proxies the responses in real time is a lot of work, but phishing tools are known for being flexible, and they don’t really need to reproduce the whole website, just the parts they care about getting data from. The rest can easily be proxied as-is without any further change.

So what can an attacker do once they fool someone into logging in to the bank? Well, not that much directly, as most actions require further CAP confirmation: wires, new standing orders, and so on. You can, though, get a lot of information about the victim, including enough proof of address or identity to really mess with their life. It also makes it possible to cancel things like the standing order paying the rent, which would be quite messy for most people to deal with — although most phishing is not done to mess with people, but to get their money.

As I said, to send money you need access to the CAP codes, which means having not only the device itself, but also the card and the PIN. To execute those transactions, Barclays asks you to sign the transaction by providing the CAP device with the account number and the amount to wire. This is good, and hopefully pretty hard to tamper with (I make no guarantee on the implementation of CAP), so even if you’re acting through a proxy-phishing site, your wires are probably safe.

I say probably because, in the way the challenge-response is implemented, only the 8-digit account number enters the signature. If the phishers have studied a victim for long enough, which may be the case when attacking businesses, they could know which account gets paid manually every month, and set up an account with the same number at a different bank (different sort code). The signature would be valid for both.
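A toy model makes the flaw concrete. The `sign_wire()` below is entirely made up, a stand-in for the real CAP computation, but it deliberately covers only the 8-digit account number and the amount, as described above, so it cannot tell two destination banks apart:

```python
import hashlib
import hmac

def sign_wire(card_key: bytes, account_number: str, amount: str) -> str:
    """Hypothetical sketch of the flaw: the sort code never enters the
    signed message, so it never influences the signature."""
    message = f"{account_number}|{amount}".encode()
    return hmac.new(card_key, message, hashlib.sha256).hexdigest()[:8]

card_key = b"\x00" * 16  # stand-in for the card's secret key

# Same account number at two different banks (different sort codes):
legit = sign_wire(card_key, "12345678", "500.00")  # intended payee's bank
rogue = sign_wire(card_key, "12345678", "500.00")  # attacker's bank
assert legit == rogue  # the signature cannot distinguish the destinations
```

Had the sort code been part of the signed message, the two signatures would differ, and the substitution would fail.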

To be fair to Barclays, implementing CAP fully, the way they did here, is actually more secure than what Ulster Bank (and, I assume, the rest of RBS Group) does, with an opaque “challenge” token. While that token may encode more information, its opaqueness means there is no way for the user to know whether what they are signing is indeed what they meant to sign.

Now, these mitigations are actually good. They require continuous access to the card on request, which makes it very hard for phishing to just keep using the site in the background after the user has logged in. But they still rely on effectively a single factor. If someone gets hold of the card and the PIN (and we know at least some people will write the real one on the back of the card), then it’s game over. It’s like the locks on my flat’s door: two independent locks… except they use the same key. Sure, it’s a waste of time to pick both, which increases the chance a neighbour walks in on wannabe burglars working on the apartment door. But there’s a single key; I can’t keep the two keys on separate keychains so that a thief would only grab one of them, and if anyone gets it from me, well, game over.

Of course Barclays knows that this is not enough, so they include a risk engine: if something in a transaction doesn’t fit their profile of your activity, it is considered risky and requires additional verification, which happens to come in the form of text messages. I will not suggest that the problem here is GSM-layer attacks, as those are still not (yet) in the hands of the type of criminal aiming at personal bank accounts, but there is at the very least the risk of a thief getting hold of my bag with both my card and my phone in it, at which point the only “factors” still in my head, rather than tied to physical objects, are the (bank-provided) PIN of the card and the PIN of the phone.

This profile fitting is actually the main reason I got frustrated with Barclays: since I had just opened the account, most transactions were “exceptional”, and that is extremely annoying. It was compounded by the fact that my phone provider didn’t even let me receive SMS at the office, due to lack of coverage (now fixed), and that, at least for wires, the Barclays UI does not warn you to check your phone!

There is also a problem with the way Barclays handles these “exceptional transactions”: debit card transactions are rejected out-and-out. The Verified by Visa screen tells you to check your phone, but the phone will only ask whether the transaction was yours, and after you confirm it was, it asks you to “retry in a couple of minutes” — retrying too quickly leads to the transaction being blocked by the processor directly, with a temporary card lock. The wire transfer flow does unblock the execution of the wire, which is good, but it can also push the wire past the cut-off time for non-“Faster Payments” wires.

Update (2017-12-30): since I did not make this very clear, I have added a note about this at the bottom of my new post: confirming these transactions only requires spoofing the sender, since the content and destination of the text message to send are known (it only has to say “Y”, and it always goes to the same service number). So this validation should not really count as a second authentication factor against a skilled attacker.

These are all the reasons for which I abandoned Barclays as fast as I could. Some of those are actually decent mitigation strategies, but the fact that they do not really increase security, while increasing inconvenience, makes me doubt the validity of their concerns and threat models.

LastPass Authenticator, Cloud Backup, and you

I’m still not sure how it is that over the past two years I consider myself a big expert of 2FA, it probably has to do with having wanted to implement this for a very long time for the work I did before my current employer, and the fact that things keeps changing, and bad advices keep being sent out. But as it happens here is something more.

A couple of months ago, LastPass announced Cloud Backup for LastPass Authenticator, which effectively means you can upload your TOTP shared keys (used to generate the one-time codes you see in the authenticator app) to LastPass, so that you can log in on a different device (phone) and have access to all your previously set up 2FA credentials.

It’s marketed as a solution to the TOTP dance of changing devices which is always a definite pain in the neck, and it’s not alone. Authy (which many may know as the authentication system for Humble Bundle or Twilio) has a similar feature for their app. I have honestly wondered why Google Authenticator still does not have a similar feature either, but let’s not go there for now.

The obvious question that came to me, and probably to anyone with a little bit of security knowledge, when this was announced is «who in their sane mind would use this?». But before we get to that question, let me explain something that, it turns out, is not obvious to many people.

2-factor authentication, also referred to as 2-step verification – the two refer to slightly different semantics, but I’m not here to split hairs – is the idea that you need “something you have” (or “something you have access to”) as well as “something you know”. Your username is effectively the zeroth factor, followed by your password as the first factor, and the magical second factor. At first, “something you have” mostly referred to physical devices, such as the RSA SecurID and other “tamper proof” tokens that display a one-time code, either changing every given number of seconds or changing every time you press a button.

Nowadays, the majority of 2FA users use an authenticator app, such as the already mentioned LastPass Authenticator, Authy or Google Authenticator. These run on Android or iOS and provide TOTP-based one-time codes to be used as the second factor. They are not really as secure as one would like, though, as they are still phishable. For proper security, services should really just use U2F.

So if you’re using an authenticator app, any authenticator app, you have your second factor, which is not the device itself! Rather, the second factor is the shared key, that the system gave you through the QR code (in most cases, at least), and that is combined with the current timestamp to give you a one time code (which is why it’s called TOTP: Time-based One Time Password). This is a very important distinction because, as I showed in a previous post, at least one of the services will not re-key the shared key when you set up authenticator on a new device, which means people with access to your old device still have your “factor”.

One obvious objection is that your phone is likely connected to your Google account, so if your authenticator is on it, you may get the impression that factors 0, 1 and 2 all live on the same device. That is not quite the case, because you can’t retrieve the password from a logged-in Google account on a mobile device. In place of the password, a single-use token is provided (true not only for Google, but for almost every single app out there), which is tied to the device in some form. That token effectively replaces both factors (password and TOTP), so on the device you really only have factors 0 and 2.

Why do I bring this up? Well, here’s the thing with LastPass: while all of LastPass’s assurances are believable, and they may not have made huge mistakes in their authenticator app the way they did in their extensions in the past, you already have a basket with multiple eggs in it; do you really want to add all of the rest? Assume I put a TOTP key into LastPass Authenticator with Cloud Backup, that the username and password for that account are also stored in LastPass, and that someone manages to log into my LastPass (how, that’s a different problem, but just assume they do). Now they have all three factors at their disposal.

Now, it is true that LastPass forces you to use 2FA yourself on an account that has Authenticator Cloud Backup enabled, but it is not clear to me whether that rules out using a physical YubiKey, or whether the Cloud Backup becomes unrestorable if you disable 2FA, for instance because your device got stolen (in which case, well, you just lost all your 2FAs). Having had to disable 2FA on LastPass once already (I was in China without my main authenticator device), I know it effectively relies on you having access to your registered email address… which, for someone who has already gained access to my LastPass password, would probably be a cakewalk. It’s also not clear to me whether the ability to request arbitrary information from the extension (as Tavis managed to do at least once) would allow fetching the Cloud Backup shared keys too. In which case, the whole security sandcastle is gone.

An astute reader could ask why I am even considering this a valid option and spending my time figuring out how insecure it is. The answer is a bit complicated, but let me walk through my reasoning. 2FA is good, and everybody should be using it. On the other hand, 2FA is also cumbersome, and it often feels not worth the hassle. For services that support U2F it’s very easy: you register a couple of U2F devices (say, one you keep at home in a safe, and one on your housekeys), and you will have access to at least one of them at any time. Minimal work, major security.

But would you really do the TOTP registration dance I described above for something like (and I’m checking my own history here) IFTTT? Maybe, but probably not. It’s more likely you’d decide that for those it’s easier to use SMS as the second factor, proving you have access to your phone number. Unfortunately this is a reliability problem in itself, for instance when you fly a lot. It is also not secure at all, as people have by now repeatedly demonstrated, both because intercepting SMS is getting more and more trivial, and because lots of telcos can be easily manipulated into giving up access to a phone number, particularly in the US, where the number is tied to a device, rather than to a SIM card.

I also do something like this, except my phone number is a Google Voice number (similar to Google Project Fi), which makes it easier to receive the codes wherever I am in the world, even when I’m connected only to the Internet and not to cell service. And since my Google account is protected by U2F, and its password is not in LastPass, I feel safe… enough, for most of the smaller services. But not all services let me send the code to Google Voice (some of the bridges still fail there), so I have been hoping for some cloud backup option, to at least have an easy way to move around a subset of my TOTP shared keys.

Because accounts like EA, AutoDesk (I have no licenses for them) or WordPress (I have no sites there) are probably low-value targets, enough that I wouldn’t mind using a security-imperfect solution such as LastPass Authenticator Cloud Backup… except that it defeats the whole purpose of 2FA, which it clearly does, when their passwords are also in LastPass. Funnily enough, this means I’d feel safer keeping a copy of my Google account TOTP key in LastPass, precisely because its password is not in LastPass.

As I noted above, Authy has something like this as well. Their main API, used by Humble Bundle and Twitch, relies on server-side hosting of the keys, with extra seeds given to each device on which you install their app. Setting up an account is trivial, as it relies on a phone number as the username and a one-time code sent to that number — effectively a single-factor authenticator. This means its base security is no better than that of an SMS, which makes it hardly useful, except for being easier to access when you somehow have no access to SMS. Or, as Twitch will tell you, to “cut down on those expensive SMS” — yeah, I always forget that in the US you pay to receive text messages. Strange country.

So while I don’t think Authy is a very sensible or secure option, as it shares the same vulnerabilities as SMS, their system does have an option to back up your TOTP shared keys that requires an extra password in addition to the SMS auth. Of course, by virtue of what I was saying earlier, you should not store that password in LastPass, or you’ll effectively have put one basket inside the other.

What does this mean at the end of the day? It’s obvious people find 2FA hard, and vendors want to provide better solutions that still work for users and keep them protected by 2FA. Unfortunately, none of the currently available options, except for U2F, seem very secure or reliable, and U2F is far from widespread.

My personal perfect end state would be for websites to let me log in with my Facebook or Google account, the passwords of which are not stored in LastPass, and then ask me for a 2FA token, which could then be stored in LastPass. In that case, LastPass would still hold only one factor, just as it does right now with the password; but since the first factor is not “a password” but “access to {other} account”, that is one less thing to keep in mind.

We’ll see what the future brings ups.

Let’s have a talk about TOTP

You have probably noticed that I’m a firm believer in 2FA, and in particular in the usefulness of U2F against phishing. Unfortunately, not all services out there have upgraded to U2F to protect their users, though at least some of them have managed to bring a modicum of extra safety against other threats by implementing TOTP-based 2FA/2SV (2-step verification). Which is good, at least.

Unfortunately, this can become a pain. In a previous post I complained about the problems of SMS-based 2FA/2SV. In this post I’m complaining about the other kind: TOTP. Unlike with SMS, I find this to be an implementation issue rather than a systemic one, but let’s see what’s going on.

I changed my mobile phone this week, upgrading my work phone to one whose battery lasts more than half a day, which is important since my job requires me to be on call and available during shifts. And as usual when this happens, I needed to transfer my authenticator apps to the new phone.

Some of the apps are actually very friendly about this: Facebook and Twitter use a shared secret that, once login is confirmed, is passed on to their main app, which means you just need any logged-in app to log in again and get a new copy. Neat.

Blizzard was even better. I have effectively copied my whole device config and data across with the Android transfer tool. The Battle.net authenticator copied over the shared key and didn’t require me to do anything at all to keep working. I like things that are magic!

The rest of the apps were not as nice, though.

Amazon and Online.net allow you to add a new app at any time using the same shared key. The latter has an explicit option to re-key the 2FA, disabling all older apps. Amazon does not tell you anything about it, and does not let you re-key explicitly — my guess is that it re-keys if you disable authentication apps altogether and re-enable them.

WordPress, EA/Origin, Patreon and TunnelBroker don’t allow you to change the app or retrieve the previously shared key. Instead you have to disable 2FA and then re-enable it, leaving you “vulnerable” for a (hopefully) limited time. Of these, EA allows you to receive the 2SV code by email, so I decided I’d take that over having to remember to port this authenticator over every time I change device.

If you remember, in the previous post I complained about the separate authenticator apps that kept piling up for me. I realized I don’t need as many: the login-approval feature that Microsoft and LastPass effectively provide is neat and handy, but not worth two more separate apps, so I downgraded them both to plain TOTP in the Google Authenticator app, which also gets me Android Wear support to see the codes straight on my watch. I honestly can’t remember when I last logged into a Microsoft product, except to set up the security parameters.

Steam, on the other hand, was probably the most complete disaster to migrate. Their app, similarly to Battle.net’s, is just a specially formatted TOTP with a shared key you’re not meant to see. Unfortunately, to move the authenticator to a new phone you need to disable it first — and you disable it from the logged-in app that has it enabled. Then you can re-enable it on the new phone. I assume there is some specific recovery path if that does not work, too, but I don’t count on it.

What does this all mean? TOTP is difficult, it’s hard on users, and it’s risky. Not having an obvious way to de-authenticate an app is bad. If you were at ENIGMA, you could have listened to a talk that was not recorded, on grounds of the risky situations it described: «Privacy and Security Practices of Individuals Coping with Intimate Partner Abuse». Among the various topics the talk touched upon, there was an important note on the power of 2FA/2SV for people being spied upon: it gives them certainty that somebody else is not logging in to their account. Not being able to de-authenticate TOTP apps undermines that certainty, and having to disable your 2FA to move it to a new device makes the whole thing risky.

Then there are the features that become a compromise between usability and paranoia. As I said, I love the Android Wear integration for the Authenticator app. But since the watch is not lockable, anybody who has access to my watch while I’m not paying attention has access to my TOTP codes. That’s okay for my use case, but it may not be for everyone. The Google Authenticator app also does not allow you to set a passcode and encrypt the shared keys, which means that if you have developer mode enabled, run your phone unencrypted, or have a phone with known vulnerabilities, your TOTP shared keys can be exposed.

What does this all mean? There is no such thing as a perfect, 100% solution that covers you against every possible threat model out there; some solutions are better than others (U2F), and the compromises depend on what you’re defending against: a relative, or nation-states? If you remember, I already went over this four years ago, then again the following year, and the year after that, talking about threat models. It’s a topic I’m interested in, in case that was not clear.

And a very similar concept was expressed by Zeynep Tufekci when talking about “low-profile activists”, wanting to protect their personal life, rather than the activism. This talk was recorded and is worth watching.

Password Managers and U2F

Yet another blog post that starts with a tweet, or two. Even though I’m clearly not a security person by either trade or training, I have a vested interest in security, as many of you know, and I have written (or ranted) a lot about passwords and security in the past. I have in particular argued for LastPass as a password manager, and at the same time for U2F security keys for additional security (as you can notice in this post, where I added an Amazon link to one).

The other week, a side-by-side of the real PayPal login window and a phishing one that looked way too similar was posted, and I quoted the tweet asking PayPal why they still do not support U2F — technically PayPal does support two-factor authentication, but as far as I know that is only for US customers, and it is based on a physical OTP token.

After I tweeted that, someone who I would have expected to understand the problem better argued that password managers and random passwords completely solve the phishing problem. I absolutely disagree, and I started arguing it on Twitter, but the 140-character limit makes it very difficult to provide good information, and particularly to refer to it later on. So here is the post.

The argument goes: you can’t phish a password you don’t know, and with password managers you don’t have to (or even have a way to) know that password. It works well in theory, but let’s see why it does not actually hold up.

Password managers allow you to generate a random, complex password (well, as long as the websites allow you to), and thanks to features such as autofill, you never actually get to see the password. This is good.

Unfortunately, there are plenty of cases in which you need to either see, read, or copy the password to the clipboard. Even LastPass, which has, in my opinion, a well-defined way to deal with “equivalent domains”, is not perfect: not all Amazon websites are grouped together, for instance. While they do provide an easy way to add more domains to the equivalency list, it means I have about 30 of them customised in my own account right now.

What this means is that users are effectively trained to the idea that sometimes autofill won’t work because the domain is just not on the right list. And even when it is, sometimes the form has changed and autofill just does not work; I have seen plenty of those situations myself. And so, even though you may not know the password, phishing works if it convinces you that autofill is failing not because the site is malicious, but simply because the password manager is broken/not working as intended/whatever else.

This becomes even more likely when you’re using one of the more “open” password managers. Many think LastPass is bad, not just because they store your password server-side (encrypted) but also because it interacts directly with the browser. After all, the whole point of the LostPass vulnerability was that UI is hard. So the replacement for geeks who want to be even more secure is usually a separate app that requires you to copy-paste your password from it into the browser. And in that case you may not know the password, but you can still be phished.

If you want to make the situation even more ridiculous, go back to read my musings on bank security. My current bank (and one of my former ones) explicitly disallow you from using either autofill or copy-paste. Worse, they ask you to fill in parts of your password, e.g. “the 2nd, 4th and 12th character of your password” — so you either end up using a mnemonic password or you have to look at your password manager and count. And that is very easily phishable. Should I point out that they insist this is done to reduce the chances of phishing?
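To make the problem concrete, here is a toy model of that “give us characters 2, 4 and 12” scheme (hypothetical code, not any real bank’s implementation): an attacker who phishes a handful of these partial logins accumulates positions and characters until most of the password is recovered.

```python
import random

# Toy model (not any real bank's implementation) of partial-password
# prompts: each phished session leaks the characters at a few positions,
# and the attacker's knowledge only ever grows.
password = "correct horse"
observed = {}  # position -> character, as seen by the attacker

for _ in range(6):  # six phished login sessions
    positions = random.sample(range(len(password)), 3)
    for p in positions:
        observed[p] = password[p]  # attacker records what the victim typed

recovered = "".join(observed.get(i, "_") for i in range(len(password)))
print(f"attacker knows {len(observed)}/{len(password)} characters: {recovered}")
```

Each observed session only ever adds knowledge, which is why “we only ask for three characters” is no real protection against a patient phisher.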

I have proposed some time ago a more secure way to handle equivalent domains, in which websites can feed back information to the password managers on who’s what. There is some support for things like this on iOS at least, where I think the website can declare which apps are allowed to take over links to themselves. But as far as I know, even now that SmartLock is a thing, Chrome/Android do not support anything like that. Nor does LastPass. I’m sad.

Let me have even more fun with password managers and U2F, and feel okay about having a huge Amazon link at the top of this post:

(embedded tweet with the infographic)

Edit: full report is available for those who just don’t believe the infographic.

This is not news: I wrote about this over two years ago, after U2F keys were publicly announced — and I had been using one before that, which is clearly no mystery. Indeed, unlike autofill and copy-paste, U2F identification does not rely on the user recognizing the site, but is rather tied to the website’s own identification, which means it can’t be phished.

Phishing, as described in the NilePhish case above, applies just as much to SMS, authenticator apps and push notifications, because all the user may perceive is just a little more delay. It’s not impossible, though a bit complicated, for a “sophisticated” (or well-motivated) attacker to render the same exact page as the original site, defeating all those security-theatre measures such as login-request IDs, custom avatars and custom phrases — or systems that ask you for the 4th, 9th and 15th characters of your password.
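The reason U2F is immune to this relay trick can be sketched in a few lines. This is a toy model only: HMAC stands in for the device’s public-key signature, and the origins are made up; the real protocol signs a browser-supplied origin and challenge with a per-site key pair.

```python
import hashlib
import hmac
import secrets

# Toy sketch of U2F's origin binding (HMAC stands in for the device's
# public-key signature; the real protocol uses per-site key pairs).
def sign_assertion(device_key: bytes, origin: str, challenge: bytes) -> bytes:
    # The browser, not the user, supplies the origin it actually loaded.
    data = hashlib.sha256(origin.encode()).digest() + challenge
    return hmac.new(device_key, data, hashlib.sha256).digest()

def server_verifies(device_key: bytes, challenge: bytes, sig: bytes) -> bool:
    # The real service only accepts signatures over its own origin.
    expected = sign_assertion(device_key, "https://www.example.com", challenge)
    return hmac.compare_digest(expected, sig)

key = secrets.token_bytes(32)
challenge = secrets.token_bytes(16)

# A signature produced on a lookalike phishing origin never verifies,
# no matter how convincing the page looked to the user.
phished_sig = sign_assertion(key, "https://www.examp1e.com", challenge)
assert not server_verifies(key, challenge, phished_sig)
```

The user never sees, types, or relays anything, so there is nothing for the phishing page to capture and replay.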

User certificates, whether as a file enrolled in the operating system or on a smartcard, are of course an even stronger protection than U2F, but having used them, I can say they are not really that friendly. That’s a topic for another day, though.

Logging in offline

Over two years ago, I described some of the advantages of U2F over OTP, when FIDO keys were extremely new and not common at all. I have since moved most of my logins that support it to U2F, but the number of services supporting it is still measly, which is sad. On the other hand, just a couple of days ago Facebook added support for it, which is definitely good news for adoption.

But there is one more problem with the 2-factor authentication or, as some services now more correctly call it, 2-step verification: the current trend of service-specific authentication apps, not following any standard.

Right now my phone has a number of separate apps that are either dedicated authentication apps, or have authentication features built into a bigger app. There are pros and cons to this approach of course, but at this point I have at least four dedicated authentication apps (Google Authenticator, LastPass Authenticator, Battle.net Authenticator and Microsoft’s) plus a few other apps that simply include authentication features in the same service client application.

Speaking of Microsoft’s authenticator app, which I didn’t link above. As of today, what I had installed and configured was an app called “Microsoft Account” — when I went to look for a link, I found out that it’s just not there anymore. It looks like Microsoft simply de-listed the application from the app store. Instead, a new Microsoft Authenticator is now available, taking over the same functionality. Judging from the Play Store ID, this app comes from their Azure development team, but more importantly it is the fourth app that appears just as “Authenticator” on my phone.

Spreading second-factor authentication across different applications kind of makes sense: since the TOTP/HOTP system (from now on I will just say TOTP, meaning both) relies on a shared key generated when you enroll the app, concentrating all the keys in a single application is clearly a bit of a risk — if someone could easily access the data of a single authentication app and fetch all of its keys, you don’t want that to hand them access to all your services.

On the other hand, having to install one app for the service and one for the authentication is … cumbersome. Even more so when said authentication app is not using a standard procedure, case in point being the Twitter OTP implementation.

I’m on a plane, with a “new” laptop (one I have not logged into Twitter with). I try to log in, and Twitter nicely asks me for the SMS they sent to my Irish phone number. Oops, I can’t connect to phone service from high up in the air! But fear not, it tells me: I can give them a code from the backups (on a different laptop, encrypted with a GPG key I have with me but not at hand in the air) or a code from the Twitter app, even if I’m offline.

Except, you need to have it set up first. And that you can’t do offline. It turns out that if you just visit the same page while online, it initializes and then works offline from then on. Guess when you’re likely to go looking for the offline code generator for the first time? Compare with the Facebook app, which also includes a code generator: once you enable 2-step verification for Facebook, each time you log in to the app a copy of the shared key is provided to it, so every app will generate the same code. And you don’t need to request the code manually in the apps. The first time you need to log in with the phone offline, you just follow the new flow.

Of course, both Facebook and Twitter allow you to add the code generator to any TOTP authenticator app. But Facebook effectively sets that up for you transparently, on as many devices as you want, without needing a dedicated authentication app: any logged-in Facebook app can generate the code for you.

LastPass and Microsoft’s authenticator apps are dedicated, but both of them also work as generic OTP apps. Except they have a more user-friendly push-approval notification for their own accounts. This is a feature I really liked in Duo, and one that, with Microsoft in particular, makes it actually possible to log in where the app would otherwise fail to log you in (like the Skype app for Android, which kept crashing on me when I tried to put in a code). But the lack of standardization (at least as far as I could find) requires you to have a separate app for each of these.

Two of the remaining apps I have installed (almost) only for authentication are the Battle.net and Steam apps. It’s funny how the gaming communities and companies appear to have been the ones pushing the hardest at first, but I guess that’s not too difficult to imagine once you realize how much disposable money gamers tend to have, and how loud they can be when something’s not to their liking.

At least the Steam app tries to be something else besides an authenticator, although honestly I think it falls short: finding people with it is hard, and except for the chat (which I honestly very rarely use on desktop either), the remaining functions are so clunky that I only open the app to authenticate requests from the desktop.

Speaking of gaming communities and authentication apps, Humble Bundle added 2FA support some years ago. Unfortunately, instead of implementing standard TOTP, they decided on an alternative approach: you can choose between SMS and a service called Authy. The main difference between the Authy service and a plain TOTP app is that the service appears to keep a copy of your shared key. They also allow you to add other TOTP keys, and because of that I’m very unhappy with the idea of relying on such a service: now all your keys are concentrated not only in an app on your phone, but also on a remote server. And the whole point of using 2FA, for me, is that my passwords are already stored in LastPass.

There is one more app in the list of mostly-authenticator apps: my Italian bank’s. I still have not written my follow-up, but let’s just say that my Italian bank used TOTP-token authentication before, and has since moved to a hybrid approach; one such authentication system is their mobile app, which I can use to authenticate my bank operations (as the TOTP token expired over a year ago and they have not replaced it yet). It’s kind of okay, except I find that bank’s app too bothersome to use, and right now I never bother with it.

The remaining authentication systems either send me SMS or are configured in Google Authenticator. In particular, for SMS, the most important services are configured to send their messages to my real Irish phone number. The least important ones, such as Humble Bundle itself, and Kickstarter — which also insists on not letting me read a single page without first logging in — send their authentication codes to my Google Voice phone number; they require an Internet connection, but that also means I can use them while on a flight.

Oh yes, and of course there are a couple of services for which the second factor can be an email address, in addition to, or in place of, a mobile phone. This is actually handy for the same reason why I send them to Google Voice: the code is sent over the Internet, which means it can reach me when I’m out of mobile connectivity, but still online.

As for the OTP app, I’m still mostly using the Google Authenticator app, even though FreeOTP is available: the main reason is that the past couple of years have finally made the app usable (no more blind key overwrites when the username is the only information provided by the service, and the ability to re-order the authenticator entries). But the killer feature in the app, for me, is the integration with Android Wear. Not having to figure out where I last used the phone to log in on Amazon, and just opening the app on the watch, makes it much more user-friendly — though it could be friendlier still if Amazon supported U2F at this point.

I honestly wonder whether a device with a multi-week battery, whose only job would be to keep TOTP running, would be possible. I could theoretically use my old iPod Touch to just keep an Authenticator app on, but that’d be bothersome (no Android Wear) and probably just as unreliable (very shoddy battery). But a device that is usually disconnected from the network, dedicated only to keeping TOTP running, would definitely be an interesting security level.

What I can definitely suggest is making sure you get yourself a FIDO U2F device, whether it is the Yubico one, the cheaper alternative, or the latest BLE-enabled release. The user-friendliness over using SMS or app codes makes up for the small price to pay, and the added security is clearly worth it.

Why is U2F better than OTP?

It is not really obvious to many people how U2F is better than OTP for two-factor authentication; in particular, I’ve seen it compared with full-blown smartcard-based authentication, and I think that’s a bad comparison to make.

Indeed, the Security Key is not protected by a PIN, and the NEO-n is designed to be semi-permanently attached to a laptop or desktop. At first this seems pretty insecure, as secure as storing the authorization straight on the computer, but that’s not the case.

But let’s start from the target users: the Security Key is not designed to replace pure-paranoia security devices such as 16Kibit-per-key smartcards, but rather the on-phone or by-SMS OTP two-factor authenticators — those that use Google Authenticator or other open-source implementations, or that are configured to receive SMS.

Why replace those? At first sight they all sound like perfectly good ideas, so what’s to be gained by replacing them? Well, there is plenty, first of all the user-friendliness of this approach. I know it’s an overused metaphor, but I do actually judge features on whether my mother would be able to use them — she’s not a stupid person and can use a computer mostly just fine, but adding any more procedures is something that would frustrate her quite a bit.

So either having to open an application and figure out which of many codes to use, or having to receive an SMS and then re-type the code, would not be something she’d be happy with. Even more so because she does not have a smartphone, and she does not keep her phone on all the time, as she does not want to be bothered by people. Which makes both the Authenticator and SMS routes a poor choice — and let’s not try to suggest that there are ways to be unreachable on the phone without turning it off; that would just be more to learn that she does not care about.

Similar to the “phone-is-not-connected” problem, but for me rather than my mother, is the “wrong-country-for-the-phone” problem. I travel a lot, this year aiming for over a hundred days on the road, and there are very few countries in which I keep my Irish phone number reachable: namely Italy and the UK, where Three is available and I don’t pay roaming, when the roaming system works (last time I was in London, it wasn’t). In the other countries, including the US, which is obviously my main destination, I use a local SIM card for data and calls. This means that if my 2FA setup sends an SMS to the Irish number, I won’t receive it easily.

Admittedly, an alternative way to do this would be for me to buy a cheap featurephone, so that instead of losing access to that SIM, I can at least receive calls/SMS.

This is not only theoretical. I have already been at two conferences (USENIX LISA ’13 and Percona MySQL Conference 2014) where I realized I had cut myself out of my LinkedIn account: the connection came from a completely different country than usual (the US rather than Ireland), so it required re-authentication… but the account was configured to send the SMS to my Irish phone, which I had no access to. Given that conferences are exactly where you meet people you may want to look up on LinkedIn, it’s quite inconvenient — luckily the authentication on the phone persists.

The authenticator apps are definitely more reliable than that when you travel, but they come with their own set of problems. Besides the incomplete coverage of services (LinkedIn, noted above, for instance does not support authenticator apps), which is going to be a problem for U2F as well, at least at the beginning, neither Google’s nor Fedora’s authenticator app allows you to take a backup of the private keys used for OTP generation, which means that when you change your phone you’ll have to replace the OTP generation parameters one by one. For some services, such as Gandi, there is also no backup code, so if you happen to lose, break, or reset your phone without first disabling the second-factor auth, you’re now in trouble.

Then there are a few more technical problems. HOTP, like other OTP implementations, relies on shared state between the generator and the validator: a counter of how many times a code was generated. The client increases it with every generation; the server should only increase it after a successful authentication. Even discounting bugs on the server side, a malicious actor whose intent is to lock you out can simply generate enough codes on your device that the server’s look-ahead window will no longer contain the valid code.

TOTP instead relies on the time being synchronized between server and generator, which is a much safer assumption. Unfortunately, this also means you have a limited amount of time to type your code, which is tricky for people who are not used to typing quickly — Luca, for instance.
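Both schemes boil down to a few lines of HMAC, which makes the counter problem easy to see. Here is a minimal sketch per RFC 4226 (HOTP) and RFC 6238 (TOTP); the look-ahead window size of 10 is an illustrative choice, real servers pick their own:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-SHA1 over the big-endian counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """RFC 6238: HOTP with the counter derived from the current time."""
    return hotp(secret, int(time.time()) // step, digits)

def hotp_validate(secret: bytes, code: str, server_counter: int,
                  look_ahead: int = 10):
    """Server-side HOTP check: scan a small window past the stored counter.
    If the client has burned more than `look_ahead` codes, the valid one
    falls outside the window and the user is locked out."""
    for c in range(server_counter, server_counter + look_ahead):
        if hmac.compare_digest(hotp(secret, c), code):
            return c + 1  # resynchronized server counter
    return None
```

The `hotp_validate` sketch shows the lockout directly: generate `look_ahead + 1` codes on the client without logging in, and every code it produces from then on falls past the window.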

There is one more problem with both implementations: they rely on the user choosing the right entry in the list and copying the right OTP value. This means you can still phish a user into typing in an OTP and use it to authenticate against the service: 2FA is a protection against third parties gaining access to your account when your password is posted online, rather than a protection against phishing.

U2F helps here, as it lets the browser handshake with the service before providing the current token to authenticate the access. Sure, there might still be gaps in its implementation, and since I have not studied it in depth I’m not going to vouch for it being untouchable, but I trust the people who worked on it, and I feel safer with it than I would with a simple OTP.

Packaging for the Yubikey NEO, a frustrating adventure

I have already posted a howto on how to set up the YubiKey NEO and YubiKey NEO-n for U2F, and I promised I would write a bit more on the adventure to get the software packaged in Gentoo.

You have to realize first that my relationship with Yubico has not always been straightforward. I at least once decided against working on the Yubico set of libraries in Gentoo because I could not get hold of a device the way I wanted to. But luckily I was now able to place an order with them (for some two thousand euro) and I have my devices.

But Yubico’s code is usually quite well written and designed to be packaged much more easily than most other device-specific middleware, so I cannot complain too much. Indeed, they split and release different libraries separately, with different goals, so that you don’t need to wait for enough changes to pile up before they make a new release. They also actively maintain their code on GitHub, and then push proper make dist releases to their website. They are in many ways a packager’s dream company.

But let’s get back to the devices themselves. The NEO and NEO-n come with three different interfaces: OTP (old-style YubiKey, just much longer keys), CCID (smartcard interface) and U2F. By default the devices are configured as OTP-only, which I find a bit strange, to be honest. It is also the case that at the moment you cannot enable both U2F and OTP modes, I assume because there is a conflict in how the “touch” interaction behaves: indeed, there is a touch-based interaction in the CCID mode that gets entirely disabled once you enable either U2F or OTP, and the two can’t share it.

What is not obvious from the website is that to enable the U2F (or CCID) modes, you need to use yubikey-neo-manager, an open-source app that can reconfigure the basics of the Yubico device. So of course I had to package the app for Gentoo, together with its dependencies, which turned out to be two libraries (okay, actually three, but the third one, sys-auth/ykpers, was already packaged in Gentoo — and actually originally committed by me, with Brant proxy-maintaining it; the world is small, sometimes). It was not too bad, but there were a few things worth noting down.

First of all, I had to deal with dev-libs/hidapi, which allows programmatic access to raw HID USB devices: the ebuild failed for me, both because it was not depending on udev and because it was unable to find the libusb headers — which turned out to be caused by bashisms in the configure.ac file, something that became obvious once I moved to dash. I have now fixed the ebuild and sent a pull request upstream.
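For the record, the class of bug looks something like this (a hypothetical reconstruction, not the exact upstream line): bash accepts `==` inside `test`, while POSIX sh, and therefore dash, only guarantees `=`.

```shell
#!/bin/sh
# Hypothetical example of the bashism class that breaks configure under dash:
# '[ x == y ]' is a bash extension; POSIX test only defines '='.
have_libusb=yes

# Portable form: works under bash, dash, and any POSIX sh.
if [ "$have_libusb" = "yes" ]; then
    echo "libusb headers found"
fi
# The bash-only form would be: [ "$have_libusb" == "yes" ]
# which dash rejects with "unexpected operator".
```

Since configure scripts run under whatever /bin/sh happens to be, these bugs stay invisible until someone builds on a system where /bin/sh is not bash.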

This was the only really hard part at first, since the rest of the ebuilds, for app-crypt/libykneomgr and app-crypt/yubikey-neo-manager, were mostly straightforward — except that I had to figure out how to install a Python package, as I had never done so before. It’s actually fun how distutils will error out with a violation of install paths if easy_install tries to bring in a non-installed package such as nose, way before the Portage sandbox triggers.

The problems started when trying to use the programs, doubly so because I don’t keep a copy of the Gentoo tree on the laptop, so I wrote the ebuilds on the headless server and then tried to run them on the actual hardware. First of all, you need access to the devices to be able to set them up: the libu2f-host package installs udev rules to give the plugdev group access to the hidraw devices — but it also needed a pull request to fix them. I also added an alternative version of the rules for systemd users that does not rely on the group but instead uses ACL support (I was surprised — I essentially suggested the same approach to replace pam_console years ago!)

Unfortunately that only works once the device is already in U2F mode, which is not the case when you’re setting up the NEO for the first time, so I originally set it up using kdesu. I have since decided that the better way is to use the udev rules I posted in my howto post.

After this, I switched off OTP and enabled the U2F and CCID interfaces on the device — and I couldn’t make it stick: the manager kept telling me that the CCID interface was disabled, even though the USB descriptor properly called it “Yubikey NEO U2F+CCID”. It took me a while to figure out that the problem was in the app-crypt/ccid driver, and indeed the changelog for the latest version points out support specifically for the U2F+CCID device.

I have updated the ebuilds since, not only to depend on the right version of the CCID driver – the README for libykneomgr tells you to install pcsc-lite but not which CCID driver you need – but also to check for the HIDRAW kernel driver, as otherwise you won’t be able to either configure or use the U2F device for non-Google domains.

Now there is one more part of the story that needs to be told, but in a different post: getting GnuPG to work with the OpenPGP applet on the NEO-n. It was not as straightforward as it could have been, and it did lead to disappointment. It’ll be a good post for next week.

Setting up Yubikey NEO and U2F on Gentoo (and Linux in general)

When the Google Online Security blog announced earlier this week the general availability of Security Key, everybody at the office was thrilled, as we had been waiting for that day for a while. I’ve been using one for a while already, and my hope is for it to be easy enough for my mother and my sister, as well as my friends, to start using it.

While the promise is for a hassle-free second factor authenticator, it turns out it might not be as simple as originally intended, at least on Linux, at least right now.

Let’s start with the hardware, as there are four different options of hardware that you can choose from:

  • Yubico FIDO U2F which is a simple option only supporting the U2F protocol, no configuration needed;
  • Plug-up FIDO U2F which is a cheaper alternative for the same features — I cannot tell whether it is as sturdy as the Yubico one, so I can’t vouch for it;
  • Yubikey NEO which provides multiple interfaces, including OTP (not usable together with U2F), OpenPGP and NFC;
  • Yubikey NEO-n the same as above, without NFC, and in a very tiny form factor designed to be left semi-permanently in a computer or laptop.

I got the NEO mostly to be used with LastPass – the NFC support allows you to have 2FA on the phone without having to type the code back from a computer – and a NEO-n to leave installed in one of my computers. I already had a NEO from work to use as well. The NEO requires configuration, so I’ll get back to it in a moment.

The U2F devices are accessible via hidraw, a driverless access protocol for USB devices, originally intended for devices such as keyboards and mice but also leveraged by UPSes. The catch is that you need access to the device node, which the Linux kernel by default makes accessible only to root, for good reasons.

To make the device accessible to you, the user actually at the keyboard of the computer, you have to use udev rules, and those are, as always, not straightforward. My personal, hacky choice is to make all Yubico devices accessible — the main reason being that I don’t know all of the compatible USB product IDs, as some of them are not really available to buy but come, for instance, from developer-mode devices that I may or may not end up using.

If you’re using systemd with device ACLs (in Gentoo, that would be sys-apps/systemd with acl USE flag enabled), you can do it with a file as follows:

# /etc/udev/rules.d/90-u2f-securitykey.rules
ATTRS{idVendor}=="1050", TAG+="uaccess"
ATTRS{idVendor}=="2581", ATTRS{idProduct}=="f1d0", TAG+="uaccess"

If you’re not using systemd or ACLs, you can use the plugdev group and instead do it this way:

# /etc/udev/rules.d/90-u2f-securitykey.rules
ATTRS{idVendor}=="1050", GROUP="plugdev", MODE="0660"
ATTRS{idVendor}=="2581", ATTRS{idProduct}=="f1d0", GROUP="plugdev", MODE="0660"

-These rules do not include support for the Plug-up because I have no idea what their VID/PID pairs are, I asked Janne who got one so I can amend this later.- Edit: added the rules for the Plug-up device. Cute, their use of f1d0 as the device ID.

Also note that there are probably less hacky solutions to get the ownership of the devices right, but I’ll leave it to the systemd devs to figure out how to include that in the default ruleset.
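Whichever variant you pick, udev won’t apply a new rules file on its own; a sketch of the reload dance (the `--subsystem-match` filter is just a convenience, replugging the key works too):

```shell
# After dropping the rules file in place, reload udev and re-trigger
# so the new permissions apply without replugging the device:
sudo udevadm control --reload-rules
sudo udevadm trigger --subsystem-match=hidraw

# Verify: the hidraw node should now be group plugdev (group variant)
ls -l /dev/hidraw*
# or carry your user's ACL (systemd/uaccess variant):
getfacl /dev/hidraw0
```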

These rules will not only allow your user to access /dev/hidraw0 but also the /dev/bus/usb/* devices. This is intentional: Chrome (and Chromium; the open-source version works as well) uses the U2F devices in two different modes. One is through a built-in extension that works with Google properties and accesses the low-level device as /dev/bus/usb/*; the other is through a Chrome extension which uses /dev/hidraw* and is meant to be used by all websites. The latter is the actually-standardized specification and how you’re supposed to use it right now. I don’t know if the former workflow is going to be deprecated at some point, but I wouldn’t be surprised.

For those who, like me, bought the NEO devices: you’ll have to enable U2F mode — while Yubico provides the linked step-by-step guide, it was not entirely correct for me on Gentoo, but it should be less complicated now: I packaged the app-crypt/yubikey-neo-manager app, which already brings in all the necessary software, including the latest version of app-crypt/ccid, required to use the CCID interface on U2F-enabled NEOs. And if you already created the udev rules file as noted above, it’ll work without root privileges. Just remember that if you are interested in OpenPGP support you’ll need the pcscd service (it should auto-start with both OpenRC and systemd anyway).

I’ll recount the issues with packaging the software separately. In the meantime, make sure you keep your accounts safe, and let’s all hope that more sites will start protecting your accounts with U2F — I’ll also write a separate opinion piece on why U2F is important and why it is better than OTP; this post is just meant as documentation: how to set up the U2F devices on your Linux systems.

It’s 2014, why are domains so hard?

Last year I moved to Ireland, and as of last December I closed the VAT ID that made me a self-employed consultant. The reason it took me a while to close the ID was that I still had a few contracts going on, which I needed to let expire first. One of the side effects of closing the ID is that I had/have to deal with all the accounts where said ID was used, including the OVH account where almost all of my domains, and one of my servers, were registered — notably, xine-project.org has been registered forever at a different provider, Register4less of UserFriendly fame.

For most accounts, removing a VAT ID is easy: you update your billing information and tell them that you no longer have a VAT ID. In some rare cases you have to forgo the account, so you just change the email address to a different one and register a new account. In the case of OVH, things are more interesting. Especially so if you want to be honest about it.

In this case, I wanted to move my domains out of my account in Italy, with a VAT ID, to an account in Ireland, without said ID — one of the reasons being that the address used for registration is visible in most whois output, and while I don’t care about my address being public, that address is now my mother’s only, and it bothered me having it visible there. That was a mistake (from one point of view, and a godsend from another).

The first problem is that you cannot change either the country or the VAT ID status of an existing OVH account, which meant I needed to transfer the domains and server to a different account. There was (and possibly is, I don’t want to go looking for it) some documentation on how to transfer resources across contacts (i.e. OVH accounts), but when I tried to follow it, the website gave me just a big “Error” page — unfortunately, “Error” was the whole content of the page.

Contacted for help, OVH Italia suggested using their MoM management software to handle the transition. I tried, and the results were just as bad, but at least it errored out with an explanation: the transfer was trying to cross countries. I then contacted OVH Ireland as well as OVH Italia; the former, after a long discussion in which they suggested I do … exactly what I did, “assured me” that the procedure works correctly — only for OVH Italia to apologize a couple of days later: indeed, a month earlier they had changed the procedures because of some contractual differences between countries. They suggested using the “external transfer” – going through the standard transfer procedure for domains – but it turns out their system fails when you try that, as the domain is already in their database, so they then suggest using the “internal transfer” instead (which, as I said, does not work).

Since a couple of my domains were going to expire soon, this past weekend I decided to start moving them out of OVH, given that the two branches couldn’t decide how to handle my case. The result is that I started loading the domains onto Gandi; among the reasons, the VideoLAN people and one of my colleagues know them pretty well and warmly recommended them. This proved trickier than expected, but it also proved one thing: not all French companies are equal!

I started by moving my .eu, .es and .info domains (I own, among others, automake.info, which redirects to my Autotools Mythbuster; the reason is that if you type the name of an info manual on G+, it actually brings you there! I was planning to make them point to a copy of the respective info manuals, but I’ve not studied the GFDL enough yet to know whether I can). The .info domains are still in limbo right now, as OVH has a five-day timeout before you can transfer out, and the .es domains were transferred almost immediately (the Spanish NIC is extremely efficient in that regard: they basically just send you an email to confirm you want to change registrar, and if you accept, that’s it!), but the .eu domains were a bit of a pain.

Turns out that EURid wants a full address to assign the domain to, including a post code; unfortunately, Ireland has no post codes yet, and even the usual ways to represent my area of Dublin (4, 0004, D4, etc.) failed; even the “classical” 123456 that many Irish use failed too. After complaining on Twitter, a very dedicated Gandi employee, Emerick, checked it out and found that the valid value, according to EURid (but not to Gandi’s own frontend app, ironically), is “Dublin 4”. He fixed that for me on their backend, and the .eu registration went through; this blog is now proudly served by Gandi, which makes it completely IPv6 compatible.

But the trial was not complete yet. One of the reasons why I wanted to move to Gandi now was that Register4Less was requiring me to sort-of transfer the domain from Tucows (whose domains they resold before) to their new dedicated registrar, to keep the privacy options on. The reason for that was that Tucows had started charging more, and they would have had to pass the extra cost on to me if I wanted to keep it. On the other hand, they offered to transfer it, extend the expiration by another year, and keep the privacy option on. I did not like the option because I had just renewed the domain the previous November for a bunch of years, so I did not want to extend it even further already; and if I had to, I would at that point try to reduce the number of services I need to keep my eyes on. Besides, unlike Register4Less and OVH, Gandi supports auto-renewal of domains, which is a good thing.

Unfortunately, ICANN or whoever else manages .org decided that “Dublin 4” is not a valid postal code, so I had to drop it from the account again to be able to transfer xine-project.org. Fun, isn’t it? Interestingly, both the .org and .it administrators handle the lack of a post code properly: the former as N/A and the latter as the direct translation N.D.. Gandi has been warned; they will probably handle it sometime soon. In the meantime it seems like .eu domains are not available to Irish residents, unless they are willing to fake an address somewhere else.

And the cherries on top, now that I’m migrating everything to Gandi? Their frontend webapp is much better at handling multiple identically-configured domains, to begin with. And, as they have already shown, their support is terrific, especially when compared to the runaround I got from their French competitor. But most importantly, did you read, a couple of weeks ago, the story of @N? How an attacker got hold of a GoDaddy account and caused trouble for the owner of the @N Twitter account? Well, it turns out that the Gandi people are much more security conscious than GoDaddy (okay, that was easy): not only do they provide an option to disable the “reset password by email” feature, they also provide 2FA through HOTP, which means it’s compatible with Google Authenticator (as well as a bunch of other software).
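Since the second factor is HOTP-based, any client implementing RFC 4226 will do, not just Google Authenticator. Just to illustrate what the standard actually computes, here is a minimal sketch in Python using only the standard library (the `hotp` helper name is mine; the secret is the RFC 4226 test key, so the output can be checked against the RFC’s own test vectors):

```python
import base64
import hashlib
import hmac
import struct

def hotp(secret_b32: str, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the big-endian 64-bit counter."""
    key = base64.b32decode(secret_b32.upper())
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation offset (last nibble)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test key "12345678901234567890", base32-encoded
# as authenticator apps expect it.
secret = base64.b32encode(b"12345678901234567890").decode()
print(hotp(secret, 0))  # → "755224", the first RFC test vector
```

TOTP, as used for time-based codes, is the same computation with the counter derived from the current Unix time divided into 30-second steps.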

End of story? I’m perfectly happy to finally have a good provider for my domains, one that is safe and secure and that I can trust. And I owe Emerick a drink next time I stop by Paris! Thanks Gandi, thanks Emerick!