Passwords, password managers, and family life

Somehow, I always end up spending time writing about passwords whenever I so much as broach the subject on Twitter.

In this case, I’ve been asking around about password managers: after many years with LastPass I want to reconsider whether there is a better alternative, particularly as my needs have changed (or rather, are going to, in the not too distant future).

One of the things I’m looking for is a password manager that can generate diceware/xkcd-style passwords: a set of words in a given language that are easy to read out over (say) the phone, and to type on systems where there is no password manager app. The reason for this is that there are a few places where I need to give the password to someone else who might not otherwise be trusted with the full password list. For instance, the WiFi password for my apartment, or for my mother’s house.

But it’s a bit more complicated than that. There are a number of situations where an account does not map to a single user; rather, you may want multiple users (people) to access the same account. Take, for instance, my energy provider’s dashboard. Or the phone provider. Or the online grocery shopping…

All of these things expect a single (billing) account, but they may well be shared by a household rather than a single individual. A few services do have a concept of a shared account, but they are a small minority, and that makes less and less sense as the world progresses to such an everything-connected level.

I think it might be easy to figure out from the way I’ve been expressing this just above, but to make sure I leave clear information rather than “clues” that can be taken for public knowledge: I got to thinking about this because I have (finally, someone might say) found a soulmate. And while we don’t yet live together, I’m starting to see the rough corners of this. We have not yet gotten to “What’s the Netflix password, again?”, but I did end up changing the password to the account for the Los Angeles transport card to give her access, after first setting it with LastPass (we were visiting, and I added both of our TAP cards to the same account).

As I made clear earlier, part of this was a (minor) problem with my mother, too. But significantly less so: she never cared to have access to the power provider, phone company, and so on, just as long as she had a copy of the invoices from time to time (which I solved with a mailing list, to which only the two of us are subscribed, used as the contact address for all the services I use or used for the household in Italy).

Service providers take note: integrating with Google Drive or Dropbox so that invoices get automatically added to a shared folder would be a lovely feature to have. And not just for households: I would love it if it were easier to have a copy of my invoices automatically added to, and indexed by, Google Drive.

But now, with a partner, it’s different. As the word implies, it’s a partnership, an equal standing. Once we move in together, we’ll share the expenses, and that means sharing access to the accounts. Which means I don’t want to be the only one holding the passwords. So I need a password manager that not only allows me to share passwords easily, but also allows her to use them easily, which likely translates to being able to read them off the phone and type them into a work computer’s incognito window (because she likely won’t be allowed to install a password manager on a work computer).

Which is why I’m looking for a new password manager: LastPass is actually fairly great when it comes to sharing passwords with other accounts, but it’s effectively useless when it comes to “typeable” passwords. Its “Make pronounceable” option makes a password easier to spell out, but I don’t want to be limited to an eight-letter password just so I can type it easily, when I could just as easily use a three-word combination that is significantly stronger.

And while I could just use xkcdpass on my laptop to generate those shared passwords (which is what I did for my mother’s router), that does not really scale (it still keeps me as the gatekeeper), and it does not make that security usable for my SO. And it wouldn’t be fair to keep good password hygiene for myself only.
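For the record, what xkcdpass does is not magic. A minimal sketch in Python, assuming an EFF-style diceware wordlist saved locally (the filename here is just an example):

    import secrets

    def diceware_password(wordlist, words=3, sep="-"):
        # secrets draws from the OS CSPRNG rather than a plain PRNG.
        return sep.join(secrets.choice(wordlist) for _ in range(words))

    # Assumption: an EFF-style wordlist on disk, one "12345<TAB>word" per line.
    with open("eff_large_wordlist.txt") as wl:
        words = [line.split()[-1] for line in wl if line.strip()]

    print(diceware_password(words))

Three words from the EFF long list (7776 entries) come to about 39 bits of entropy, already on par with eight random lowercase letters, and every extra word adds almost 13 bits.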

Similarly, any solution that involves running personal infrastructure (servers, cron, git, whatever) is not an option: not only am I increasingly not relying on it myself (I even gave up on running my own blog’s webapp!), but most of my family is not even slightly interested in figuring out how to do that. And I don’t blame them in the least; they have enough of their own things to care about.

If you have any suggestions for a new password manager, please do let me know. I think I may try 1Password next, if nothing else because I think Troy Hunt’s opinion is worth something, and if he backed 1Password, there has to be a reason.

The dot-EU kerfuffle — or how EURid is messing with their own best supporters

TL;DR summary: be very careful if you use a .eu domain as your point of contact for anything. If you’re thinking of registering a .eu domain to use as your primary domain, just don’t.


I forecast a rant when I pointed out that I changed domain with my move to WordPress.

I registered flameeyes.eu nearly ten years ago. Part of the reason was that flameeyes.com was (at the time) parked by a domain squatter, and part was that I have been a strong supporter of the European Union.

In those ten years I started using the domain not just for my website, but as my primary contact email. It’s listed as my contact address everywhere; I have all kinds of financial, commercial and personal services attached to that email. It’s effectively impossible for me to ever detangle from it, even if I spent the next four weeks doing nothing but amending registrations: some services just don’t allow you to ever change email address, and many require you to contact support and spend time talking with a person to get the email updated on the account.

And now, because I moved to the United Kingdom, which decided to leave the Union, the Commission threatens to prevent me from keeping my domain. It may sound obvious, since EURid says:

A website with a .eu or .ею domain name extension tells your customers that you are a legal entity based in the EU, Iceland, Liechtenstein or Norway and are therefore, subject to EU law and other relevant trading standards.

But at the same time it exposes a terrible collision of two worlds: the technical and the political. The idea that any entity in control of a .eu domain is by requirement operating under EU law sounds good on paper… until you hit the corner case of a country leaving the Union. Then either you water down that promise, eroding trust in the domain by not upholding the law it advertises, or you force domain takeovers, eroding trust in the domain on technical merit.

Most of the important details for this are already explained in a seemingly unrelated blog post by Hanno Böck: Abandoned Domain Takeover as a Web Security Risk. If EURid forbids renewal of .eu domains for entities that are no longer considered part of the EU, a whole lot of domains will effectively be “up for grabs”. Some may currently be used as CDN aliases and load resources on other websites; those would be the worst, as they would allow whoever controls the domains to inject content into sites that should otherwise be secure.
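To make Hanno’s scenario concrete, here is a rough sketch (Python, standard library only; the src= extraction is deliberately crude) that lists script-hosting domains referenced by a page which no longer resolve, the first sign of a lapsed registration ripe for takeover:

    import re
    import socket
    import urllib.request

    def resolves(host):
        try:
            socket.gethostbyname(host)
            return True
        except socket.gaierror:
            return False

    def dangling_script_hosts(page_url):
        html = urllib.request.urlopen(page_url).read().decode("utf-8", "replace")
        # Crude extraction of hosts referenced by src= attributes.
        hosts = set(re.findall(r'src="https?://([^/":]+)', html))
        return sorted(h for h in hosts if not resolves(h))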

But it’s even more important for companies that used their .eu domain as their primary point of contact: think of any PO, invoice, or request for information that would be sent to a company email address, and now think of a malicious actor getting access to those communications! This is not just a risk that I (and any other European supporter who happened to move to the UK; I’m sure I’m not alone) face as a single individual: it’s a potentially unlimited amount of scams that people would be subjected to, as it would be trivial to pass for a company once its domain is taken over!

As you can see from the title, I think this particular move is going to hit European supporters the hardest. Not just the individuals (like me!) who wanted to signal that they feel part of something bigger than their country of birth, but also the UK companies that, I expect, used a .eu domain specifically to declare themselves open to European customers: otherwise, between pricing in Sterling and a .co.uk domain, buying from them would always feel like buying “foreign goods”. Now those companies, which believed in Europe, find themselves in the weakest of positions.

Speaking of individuals, when I read the news I did a double-take, and had to check the rules for .eu domains again. At first I assumed something was clearly wrong: I’m a European Union citizen, surely I can keep my domain no matter where I live! Unfortunately, that’s not the case:

In this first step the Registrant must verify whether it meets the General Eligibility Criteria, whereby it must be:
(i) an undertaking having its registered office, central administration or principal place of business within the European Union, Norway, Iceland or Liechtenstein, or
(ii) an organisation established within the European Union, Norway, Iceland or Liechtenstein without prejudice to the application of national law, or
(iii) a natural person resident within the European Union, Norway, Iceland or Liechtenstein.

If you are a European Union citizen, but you don’t want your digital life to ever be held hostage by the Commission or your country’s government playing games with it, do not use a .eu domain. Simple as that. EURid does not care about the well-being of their registrants.

If you’re a European company, do think twice about whether you want to risk that a change of government in the country where you’re registered would expose you, your suppliers, and your customers to a wild west of taken-over domains.

Effectively, what EURid has signalled with this is that they care so little about the technical hurdles of their customers that I would suggest against anyone ever relying on a .eu domain at all. Register it as a defense against scammers, but don’t do business on it, as it’s less stable than certain microstate domains, or even the more trendy and modern gTLDs.

I’ll call this an own goal. I still trust the European Union, and the Commission, to have the interests of the many in mind. But the way they tried to tie a legal jurisdiction to the .eu TLD was brittle at best to begin with, and now there is no way out that does not ruin someone’s day and erode trust in that very same domain.

It’s also important to note that most of the bigger companies, the ones I hear a lot of European politicians complain about, would have no problem with something like this: just create a fully-owned subsidiary somewhere in Europe, say Slovakia, and have it hold onto the domain. Then have it simply forward to a gTLD where you do business, so you don’t even give the impression of counting on that layer of legislative trust.

Given the scary damage that would be caused by losing control of my email address of ten years, I’m honestly considering looking for a similar loophole. The cost of establishing an LLC in another country, firmly within EU boundaries, is not pocket money, but it’s still chump change compared to the damage (financial, reputational, relational, and so on) that losing the domain would cause. It would be a good investment.

Designing My Password Manager

So, while at the (in my opinion excellent) Enigma, Tavis decided to nerd-snipe me into thinking about how I would design a password manager. This is because he knows (and disagrees with) my present choice of LastPass as my password manager, despite the existence of a range of opensource password managers.

The main reason why I have discarded the available password managers is that, in my opinion, they all miss the main point of what a password manager should be: convenient. Password reuse is a problem because it’s more convenient than using dozens, maybe hundreds, of different passwords for all the different services. An easy-to-use password manager has to be even more convenient than reusing passwords.

The first problem I found is that effectively all of the opensource password managers I know of just use a single file blob, and leave the problem of syncing it to you. The best I have seen was one package providing integration with Dropbox, Google Drive and ownCloud to sync the blob around, which works just as long as you don’t make conflicting changes on different devices. To me, the ability to make independent changes on different devices is actually a big requirement. This means I would use a file format that allows encrypting “per row”, both the keys and the values (because you don’t want to leak which accounts the user registered on, if the file is leaked). I would probably gravitate towards something like the Bigtable format.
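To make the per-row idea concrete, here is a minimal sketch in Python using the cryptography package’s Fernet; the class and its layout are my own illustration, not any existing manager’s format:

    from cryptography.fernet import Fernet

    class Vault:
        """Illustrative per-row vault: every entry is sealed on its own, so
        two devices can add or edit different rows and the resulting files
        can be merged without conflicts."""

        def __init__(self, key: bytes):
            self._fernet = Fernet(key)

        def seal(self, service: str, secret: str):
            # Both the service name and the secret are encrypted, so a
            # leaked file does not even reveal which accounts exist.
            return (self._fernet.encrypt(service.encode()),
                    self._fernet.encrypt(secret.encode()))

        def open(self, row):
            sealed_service, sealed_secret = row
            return (self._fernet.decrypt(sealed_service).decode(),
                    self._fernet.decrypt(sealed_secret).decode())

    key = Fernet.generate_key()  # in practice, derived from a master password
    vault = Vault(key)
    row = vault.seal("example.com", "correct-horse-battery-staple")

A real implementation would also need a lookup index (say, an HMAC of the service name under a separate key) to find a row without decrypting every key, but the merge-friendliness is the point here.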

Another problem, present in Chrome SmartLock too, is the lack of support for what LastPass calls “equivalent domains”. For many silly reasons, particularly if you travel a lot or have lived in different countries, you end up collecting a long list of websites using either separate domain names, or at least different TLDs. An example of this is Amazon, which uses a constellation of different domains that all share the same account management (except for Amazon Japan). A sillier example is Yelp and TripAdvisor, which change TLD depending on the IP address you’re coming from, despite being exactly the kind of services you would use outside your usual country.

Admittedly, as Tavis suggested, this should be solved by the services themselves, using a single host for login/identity management. I do not expect that to happen any time soon. My previous proposal was to define equivalence as part of a well-known configuration file (together with other improvements to password management). I now have some personal introspection questions about this, because I wonder if there is a privacy risk in sending requests to the other domains to validate the reciprocal equivalence configurations. So I think I’ll leave this up for debate for later.
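To illustrate, a sketch of what that could look like; the well-known path and JSON format here are hypothetical, not a published standard:

    import json
    import urllib.request

    # Hypothetical location: no such well-known path is actually standardized.
    WELL_KNOWN = "/.well-known/equivalent-domains.json"

    def equivalents(domain):
        with urllib.request.urlopen("https://" + domain + WELL_KNOWN) as resp:
            return set(json.load(resp))

    def mutually_equivalent(a, b):
        # Only honour reciprocal claims, so that a site cannot unilaterally
        # declare itself equivalent to (and share passwords with) another.
        # These are exactly the cross-domain requests whose privacy impact
        # is questioned above.
        return b in equivalents(a) and a in equivalents(b)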

The next design bit is figuring out how the password generator should behave. We already have a number of good password generators of different types, including software implementations of diceware passwords (despite the diceware site repeatedly telling you not to use computer random generators, which adds to the inconvenience), and xkcdpass, which generates passwords that are easier to remember, or at least to type. I think a good password manager should allow for more than just the random-bunch-of-characters passwords that LastPass uses.

In particular, I have a few websites for which I use passwords generated by xkcdpass, because I need to actually type in the password rather than use the password manager’s autofill capabilities. This is the case for Sony and Nintendo accounts, which need to be typed on consoles, and for Telegram, as I need to type the password on my watch to receive messages there. Unfortunately, implementing this is probably going to be a UX nightmare, one reason being the ability to select different wordlists. Non-English speakers are likely interested in using their own language for it. Or even English speakers who are not afraid of other languages, and may decide to throw off a possible attacker anyway.

Ideally, the password generation settings would be stored on a domain-by-domain basis, so that if a certain website only allows numbers in its passcode, or has a specific character limit, the same settings are used to generate a new password if it’s ever breached. This may sound minor, but to me it would be so much of a time (and frustration) saver that it would easily become a killer feature.
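A sketch of what per-domain generation settings could look like; the domains and their constraints are made up for illustration:

    import secrets
    import string
    from dataclasses import dataclass

    @dataclass
    class GenerationPolicy:
        length: int = 16
        alphabet: str = string.ascii_letters + string.digits + "!$%&*"

        def generate(self) -> str:
            return "".join(secrets.choice(self.alphabet)
                           for _ in range(self.length))

    # Remembering how each password was generated means a post-breach
    # rotation can reuse the same constraints, with no trial and error.
    policies = {
        "example-bank.com": GenerationPolicy(length=6, alphabet=string.digits),
        "example-shop.com": GenerationPolicy(length=20),
    }

    new_password = policies["example-bank.com"].generate()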

But all of these ideas amount to nothing without good, convenient, and trustworthy client implementations. Right now, one of the many weak spots of LastPass is its Chrome extension (and the Firefox one too). A convenient password manager ought to integrate with the browser and, since it’s 2018 after all, with your phone. Unfortunately, here is where an opensource offering can’t really help as much as we would all like: it still relies (hugely) on trust. As far as I can tell, there is no way to make absolutely certain that the code of a Chrome extension on the Web Store, or of an Android app on either the Play Store or F-Droid, corresponds exactly to a certain source distribution.

Don’t get me wrong, this is a real problem right now with closed source extensions too. You need to trust that the extension is not injecting malicious content into your webpage, or exfiltrating data out of your browser session. Earlier this year a widely used Chrome extension was found to be malicious, and it wasn’t removed from Chrome’s repository until it had been identified as such. At least I can have a (maybe undeserved) trust in LogMeIn not to intentionally ruin their reputation by pushing actively malicious code to the Store. Would I say the same of a random single developer maintaining their widely used extension?

What this means to me is that building a good replacement for LastPass is not just a technical problem to be solved by actively syncing with cloud storage services… it’s a problem of social trust, and that requires quite a bit of thought from many actors in the market: browser vendors, device manufacturers, and developers. I’m not sure that thought is currently happening. So I don’t hold my breath, and keep making compromises. I made mine with my threats in mind; you should make yours with what concerns you the most.

SMS for two-factor authentication

Having now spent a few days at 34C3 (I’ll leave my comments on it for a discussion over a very warm coffee with a nice croissant on the side), I have already heard presenters reference the SS7 hack a few times as an everyday security threat. You probably won’t be surprised that I don’t agree it’s an everyday security issue, while still thinking it is a real one.

Indeed, while some websites refer to the SS7 hack as the reason not to use SMS for auth, you can at least find more reasonable articles talking about the updated NIST recommendation. My own preference for TOTP (as used in authenticator apps) comes from the fact that I don’t need to be registered on a mobile network, and I can use it to log in while on flights.

I want to share some more thoughts that add to the list of reasons not to use SMS as a second authentication factor. While I do not believe the SS7 attack is something all users should account for in their day-to-day life, it is one more entry on the list of reasons why SMS auth is not a good idea at all, and I don’t want people to think that, just because they are unlikely to be attacked by someone leveraging SS7, it’s fine to use SMS authentication instead.

The obvious first problem is reliability: as I said in the post linked above, SMS-based authentication requires you to have access to the phone with the SIM, and requires that it’s connected to the network and allowed to receive the SMS. This is fine if you never travel and have good reception wherever you need to log in. But I have been in multiple situations where I was unable to receive SMS (in flights, as I said already; while visiting China the first time, with 3 Ireland not giving pay-as-you-go customers access to any roaming partner; or even at my own desk at the office with 3 UK after I moved to London).

This lack of reliability is unfortunate, but not by itself a security issue: it prevents you from accessing the account the 2FA is set on, which sounds like the system doing what it’s designed to do, failing closed. At the same time, it increases the friction of using 2FA, reducing usability and pushing users, bit by bit, to stop using 2FA altogether. Which is a security problem.

The other problem is the ability to intercept or hijack those messages. As you can guess from what I wrote above, I’m not referring to SS7 hacks or equivalent. It’s all significantly simpler.

The first way to intercept SMS auth messages is having access to the phone itself. Most phones, iOS and Android alike, are configured to show new text messages on the lock screen. In some cases, only part of the message is visible, and to see the rest you’d have to unlock the phone – assuming the phone is locked at all – but 2FA messages tend to be very short and to the point, showing the code in the preview for ease of access. On Android, such a message can also be swiped away without unlocking the phone. A user who fell victim to this type of interception might have a very hard time noticing it, as nowadays the SMS app is likely not opened very often, and a swiped-off notification would take time to be noticed1.

The hijacking I have in mind is a bit more complicated, and (probably) more noticeable. Instead of using the SS7 hack, you can just take over a phone number by leveraging the phone providers. And this can be done in (at least) two ways: you can convince the victim’s provider to reprovision the number to a new SIM card within the same operator, or you can port the number to a new operator. How easy or hard these two processes are varies a lot between countries; some countries are safer than others.

For instance, the UK system for number portability is what I expect to be the most secure (if not the most user-friendly) I have seen. The first step is to get a Porting Authorisation Code (PAC) for the number you want to port, which you do by calling the current provider. None of the three providers I have had so far in the UK offered any way to get this code online, which is a tad safer, as a misplaced password cannot by itself give up full control of the line. And while the code could be “intercepted” the same way I pointed out above for authentication codes, the (new) operator does get in touch with you to remind you when the port will take place, giving you a chance to contact them if something doesn’t sound right. In the case of Vodafone, they also send you an email when you request the PAC, meaning just swiping away the notification is not enough to hide the fact that it was requested in the first place.

In Ireland, a porting request completes in less than an hour, and only requires (brief) access to the line you want to take over, as the new operator will send a code to it to confirm the request. Which means the process, while significantly easier for customers, is also extremely insecure. In Italy, I actually went to a store with the SIM of the line I wanted to port, and I don’t remember them asking for anything but my ID to open the new line. No authentication code is involved at all, so if you can fake enough documents, you can likely take over any line. I don’t remember whether they notified my old SIM card before the move. I have not tried number portability in France, but it appears you can get the RIO (the equivalent transfer code) from Free’s online system, at the very least.

The good thing about all the portability processes I’ve seen up to now is that at least they do not drop a new number onto the old SIM card. I was told this is (or at least was) actually common for US providers, where porting a number out just assigns a new number to the old SIM. In that case, it would not be a surprise if it took a while for a victim to notice their number had been taken over.

If you’re curious, you can probably test this yourself. Call your phone provider from a number other than your own, and see how many and which security questions they ask to verify that it is actually their customer calling rather than a random stranger. I think the funniest I’ve had was Three Ireland, which asked me for the number I “recently” sent a text message to, or a recent call made or received; you can imagine how easy it is to get someone to send you a text message, or to call you, if you’re close enough to them, or even just to call them, have them pick up, and then report to the provider that the line was last called by the number you used.

And then there is the other interesting point of SMS-based authentication: the codes last longer. A TOTP has a short lifetime by design, as it’s time-based. Add some fuzzing: most implementations I’ve seen allow a single code to be valid for twice the time it is displayed on the authenticator, by accepting a code from the past or from the future, generally for half the expected duration on each side. If the code normally lasts 60 seconds, they would accept it 30 seconds before it’s supposed to be used, and 30 seconds after. But text messages can (and do) take much longer than that to arrive.
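For reference, this is how the acceptance window looks in a standard TOTP (RFC 6238) verifier; a minimal sketch, with illustrative step and skew values:

    import hmac
    import struct
    import time
    from hashlib import sha1

    def totp(secret: bytes, counter: int, digits: int = 6) -> str:
        # RFC 4226/6238 dynamic truncation of an HMAC-SHA1 digest.
        digest = hmac.new(secret, struct.pack(">Q", counter), sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    def verify(secret: bytes, code: str, step: int = 30, skew: int = 1) -> bool:
        now = int(time.time()) // step
        # Accepting one step in the past and one in the future is the
        # "fuzzing" described above: the code works for up to twice its
        # displayed lifetime, and no longer.
        return any(hmac.compare_digest(totp(secret, now + d), code)
                   for d in range(-skew, skew + 1))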

And this is particularly useful for attackers of systems that do not implement 2FA correctly. Entering a wrong OTP usually does not invalidate a login attempt, at least not on the first mistake, because users can mistype, or can miss the window to send an OTP. But sometimes there are egregious errors, such as the one made by N26, which neither rate-limited nor invalidated OTP requests, allowing simple bruteforcing of a valid OTP. Since TOTPs change without a new code being requested, bruteforcing them gives you a particularly short window of viability… SMS, on the other hand, opens up a much larger window of opportunity.

Oh, and remember the Barclays single-factor authentication? How long do you think it would take to spoof the sending number of the SMS (with the text “Y”) that authorizes a transaction, even without having access to the text message that was sent?


  1. This is an argument for using another of your messaging apps to receive text messages, whether it is Facebook Messenger, Hangouts or Signal. Assuming you use any of those on a day-to-day basis, you would then have an easy way to notice if you received messages you have not seen before.
    [return]

Barclays and single-factor authentication

In my previous post on the topic I barely touched on one of the important reasons why I did not like Barclays at all. The reason was that I still had money in my account with them, and I wanted to make sure that was taken care of before lamenting further about the state of their security. As I have now managed to close my account, I can go on and discuss this further, even though I have already touched upon the major points.

Barclays’ online banking system relies heavily on what I would define as “single-factor authentication”.

Usually, you define authentication factors as things you have or things you know. In the case of Barclays, the only thing they effectively rely upon is access to the debit card. Okay, technically you could say that this is by itself a two-factor system, as it requires access to both the debit card and its PIN. And since the EMV-CAP protocol they use executes directly on the chipcard, it is not susceptible to the usual PIN-stripping attacks that most card fraud against chip-and-PIN cards relies on.

But this does not count for much when the PIN on the card they issued me was 7766; lamenting about that is why I waited until I had closed the account and given them back the card. There seems to be a pattern of banks issuing “easy to remember” 4-digit PINs: XYYX, XXYY, and so on. One of my previous (again, cancelled) cards had a PIN terribly easy to remember for a computerist, if not for the average person: 0016.

Side note: I have read someone suggesting to badly scribble a wrong PIN on the back of the card as a theft deterrent. Though I like the idea, I’m afraid the banks won’t like it anyway. Also, it would take some work to make the scribble easy to misread as different digits, so that a thief burns the three attempts needed to block the card.

You access the Barclays online banking account through the Identify method provided by CAP: you put the card into the reader, provide the PIN, and you get an 8-digit identifier that can be used to log in on the website. Since I’m no expert on how CAP works internally, I will only venture a guess that this is similar to a counter-based OTP, as the card has no access to a real-time clock, and no challenge is provided for this operation.
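For comparison, the openly specified counter-based scheme is HOTP (RFC 4226). CAP itself is proprietary and built on EMV cryptograms, so the sketch below only shows the shape of my guess, not Barclays’ actual algorithm:

    import hmac
    import struct
    from hashlib import sha1

    def hotp(secret: bytes, counter: int, digits: int = 8) -> str:
        # Dynamic truncation per RFC 4226; the counter lives on the token
        # and increments on use, so no clock is needed.
        digest = hmac.new(secret, struct.pack(">Q", counter), sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)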

This account access sounds secure, but it’s really no more secure than a username and password, at least when it comes to phishing. You may think that producing a façade that shows the full Barclays login and proxies the responses in real time is a lot of work, but phishing tools are known for being flexible, and they don’t really need to reproduce the whole website, just the parts they care about getting data from. The rest can easily be proxied as-is, without any further change.

So what can you do once you fool someone into logging in to the bank? Well, not much directly, as most of the actions require further CAP confirmation: wires, new standing orders, and so on and so forth. You can, though, get a lot of information about the victim, including enough proofs of address or identity to really mess with their life. It also makes it possible to cancel things like standing orders paying for rent, which would be quite messy for most people to deal with; although most phishing is done not to mess with people, but to get at their money.

As I said, to send money you need access to the CAP codes, which means access not only to the device itself, but also to the card and the PIN. To execute those transactions, Barclays asks you to sign the transaction by providing the CAP device with the account number and the amount to wire. This is good, and hopefully pretty hard to tamper with (I make no guarantees about the implementation of CAP), so even if you’re acting through a proxy-phishing site, your wires are probably safe.

I say probably because, in the way the challenge-response is implemented, only the 8-digit account number is used for the signature. If the phishers have studied a victim for long enough, which may be the case when attacking businesses, they could know which account the victim pays manually every month, and set up an account with the same account number at a different bank (different sort code). The signature would be valid for both.

To be fair to Barclays, implementing CAP fully, the way they did here, is actually more secure than what Ulster Bank (and I assume the rest of the RBS Group) does with an opaque “challenge” token. While that token may encode more information, its opacity means there is no way for the user to know whether what they are signing is indeed what they meant to sign.

Now, these mitigations are actually good. They require repeated access to the card, on request, and that makes it very hard for phishers to just keep using the site in the background after the user has logged in. But they still rely on effectively a single factor. If someone gets hold of the card and the PIN (and we know at least some people will write the real one on the back of the card), then it’s game over. It’s like the locks on my flat’s door: two independent locks… except they use the same key. Sure, it’s a waste of time to pick both, which increases the chance a neighbour walks in on wannabe burglars trying the apartment door. But there’s a single key; I can’t split it across two separate keychains so that a thief would only grab one of the two, and if anyone gets it from me, well, it’s game over.

Of course Barclays knows this is not enough, so they include a risk engine. If something in a transaction doesn’t fit their profile of your activity, it’s considered risky and requires additional verification. This verification happens to take the form of text messages. I will not suggest that the problem here is GSM-layer attacks, as those are not (yet) in the hands of the kind of criminals aiming at personal bank accounts, but there is at the very least the risk of a thief getting hold of my bag with both my card and my phone in it; at that point the only “factors” still in my head, rather than tied to physical objects, are the (bank-provided) PIN of the card, and the PIN of the phone.

This profile fitting is actually the main reason why I got frustrated with Barclays: since I had just opened the account, most of my transactions were “exceptional”, and that is extremely annoying. This was compounded by the fact that my phone provider didn’t even let me receive SMS at the office, due to lack of coverage (now fixed), and the fact that, at least for wires, the Barclays UI does not warn you to check your phone!

There is also a problem with the way Barclays handles these “exceptional transactions”: debit card transactions are rejected out-and-out. The Verified by Visa screen tells you to check your phone, but the phone will only ask you whether the transaction was yours, and after you confirm it was, it asks you to “retry in a couple of minutes”; retrying too quickly leads to the transaction being blocked by the processor directly, with a temporary card lock. The wire transfer flow does unblock the execution of the wire, which is good, but it can also push the wire past the cut-off time for non-“Faster Payments” wires.

Update (2017-12-30): since I did not make this very clear, I have added a note at the bottom of my new post about the fact that confirming these transactions only requires spoofing the sender, since the content and destination of the text message are known (it only has to say “Y”, and it always goes to the same service number). So this validation should not really count as a second authentication factor against a skilled attacker.

These are all the reasons why I abandoned Barclays as fast as I could. Some of these are actually decent mitigation strategies, but the fact that they do not really increase security, while increasing inconvenience, makes me doubt the validity of their concerns and threat models.

Taking over a postal address, An Post edition

As I announced a few months ago, I’m moving to London. One of the tasks before the move is setting up postal address redirection, so that the services unable to mail me across the Irish Sea can still reach me. Luckily I know for a fact that An Post (the Irish postal service) has a redirection service, if not a cheap one.

A couple of weeks ago, I went to sign up for the service, and found I had two choices: I could go to the post office (which is inside the EuroSpar next door), show photo ID and proof of address, and pay cash or with a debit card1; or I could fill in the form online, pay with a credit card, and then post a physically signed piece of paper. I chose the latter.

There are many laughable things I could complain about in the process of setting up the redirection, but I want to focus on what I think is the biggest and most important problem. After you choose the addresses (original and new destination), the form asks where you want your confirmation PIN sent.

There is a reason why they ask. I set up the redirect well before I moved, and in particular I chose to redirect mail from my apartment to my local office, so that I could either batch the mail together or simply ask for inter-office forwarding. This meant I had access to both the original and the new address at the same time; but many people, particularly those moving out of the country, might only have access to the new address by the time they know where to forward the mail.

The issue is that if you decide to get the PIN at the new address, the only notification sent to the old address is a single letter confirming the activation of the redirection. This is presumably meant so you can call An Post and have them cancel the redirection if it was set up against your will.

While this stops a possible long-term takeover of a postal address, it still leaves a wide window of opportunity. It also has one significant drawback: the letter does not tell you where the mail will be redirected!

Let’s say you want to take over someone’s address (we’ll look later at what for). First you need to know their address; this is the simplest part, of course. Now you can fill in the redirection request on An Post’s website – the original address is given no indication that a request was filed – and get the PIN at the new address. Once the PIN is received, there is some time to enable the redirection.

Until the activation is completed and the redirection time selected, no communication at all is sent to the original address.

If your target happens to be travelling, or otherwise unable to get to their mail for a few weeks, you have an opportunity: take over the address, have some documents sent there, and get your hands on them. Of course the target will become suspicious on coming back, finding a note about a redirection and no mail; but finding a way to recover the mail without being tied to an identity is left as an exercise to the reader.

So what would you accomplish, besides annoying your target and possibly getting some of their unsolicited mail? Well, there is a significant number of interesting targets in the postal mail you receive in Ireland.

For instance, take credit card statements. Tesco Bank does not allow you to receive them electronically, and Ulster Bank will send you the paper copy even if you opt in to all the possible electronic communications. And a credit card statement in Ireland includes a lot more information than in other countries: just enough, in fact, to take over the credit card. Tesco Bank, for instance, will authenticate you with the 16-digit PAN (on the statement), your full address (on the statement), the credit limit (you guessed it, on the statement), and your date of birth (okay, this one is not on the statement, but you can probably find my date of birth pretty easily).

And even if you don’t want to take over the full credit card, having the PAN is extremely useful in and of itself for taking over other accounts. And since you have the statement, it wouldn’t be difficult to figure out what the card is used for: take over an Amazon account, and you can take over a lot more things.

But there are more concrete prizes too. For instance, I receive a significant amount of pseudo-cash2 in the form of Tesco vouchers; having physical control of the vouchers effectively means having the cash in hand. Or say you want to get hold of a frequent-guest or frequent-flyer card, because the card alone is often enough to get the benefits, and to access the information on the account. Or just get enough of a proof of address to register on any other service that requires one.

Because let’s remember: an authentication system is only as strong as its weakest link. So all those systems requiring a proof of address? You can get past all of them by having one recent enough proof of address, obtained by hijacking someone’s physical mail. And that’s just a matter of paying for it.


  1. An Post is well known for only accepting VISA Debit cards, and refuses both MasterCard Debit and VISA Credit cards. Funnily enough, they issue MasterCard cards, but that’s a story for another time.
    [return]
  2. I should at some point write a post about pseudo-cash and the value of a euro when it’s not a coin.
    [return]

Shame Cube, or how I leaked my own credit card number

This is the story of how I ended up calling my bank at 11pm on a Sunday night to ask them to cancel my credit card. But it started with a completely different problem: I thought I had found a bug in some PDF library.

(Embedded tweet: the garbled string that Dolphin showed me as the document’s title.)

I asked Hanno and Ange, since they both have a lot more experience with PDF as a format than me (I have nearly zero), as I expected this to be complete garbage, either coming from random parts of the file or from memory of the process that generated or read it, and thought it would be completely inconsequential. As you have probably guessed from the spoiler in both the title of the post and the first paragraph, that was not the case. Instead, that string is a representation of my credit card number.

After a few hours, having worked on other tasks and gone back and forth with various PDFs (including finding a possibly misconfigured AGPL library in my bank’s backend, worthy of another blog post), I realized that Okular does not actually show a title for this PDF, which suggested a bug in Dolphin (the Plasma file manager). In particular, Poppler’s pdfinfo also showed no title at all, which suggested the problem was in a different part of the code. Since the problem was happening with my credit card statements, and the statements include the full 16-digit PAN, I didn’t want to just file a bug attaching a sample, so instead I started asking around for help to figure out which part of the code was involved.

Albert Astals Cid pointed me in the right direction by telling me the low-level implementation was coming from KFileMetadata, and that quickly led me to an interesting piece of heuristics designed to guess the title of a document by looking at its first page. The code is quite convoluted, so at first I couldn’t rule out an uninitialized memory access, but I couldn’t figure out where it would be coming from either, so I decided to copy the code into a standalone executable to play around with it. The good news was that it gave me the exact same answer, so it was not uninitialized memory. Instead, the parser was mis-reading something in the file; being stable, it was unlikely to be a security issue, just sub-optimal code.

As there is no current, maintained tool that behaves for PDF like mkvinfo does for Matroska (that is, printing an element-by-element description of the content of a file), I decided to just play with the code to figure out how it chose what to use as the title. Printing out each of the candidate titles being evaluated showed it was considering first my address, then part of the summary information, and then this strange string. What was going on there?

The code is a bit difficult to follow, particularly for me, since at first I had no idea how PDF works to begin with. But the summary is that it goes through the textboxes (I already knew that PDF text is laid out in boxes) of the first page, joining the text together if a box has markers to follow up. Each of these entries is stored in a map keyed by text height, together with a “watermark” of the biggest text size encountered during the loop. If, when looking at a textbox, its height is lower than the previous maximum, it gets discarded. At the end, the content of the first, biggest textbox is reported as the title.
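In simplified form, and reconstructing from the description above rather than the actual KFileMetadata code, the heuristic boils down to:

    def guess_title(textboxes):
        """textboxes: (text, height) pairs from the first page, in order."""
        max_height = 0.0
        title = None
        for text, height in textboxes:
            if height < max_height:
                continue  # smaller than the biggest text so far: discarded
            if height > max_height:
                max_height = height
                title = text  # first textbox at the new maximum wins
        return title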

Once I disabled the height check and reported all the considered title textboxes, I noticed something interesting: the string that kept being reported appeared together with a number of textboxes drawn on top of the bank giro1 credit slip. The cheque includes a very big barcode… and that’s where I started sweating a bit.

The reason for the sweat is that by then I had already guessed I had made a huge mistake sharing the string that Dolphin was showing me. The payment reference for a credit card is universally the full 16-digit number (PAN). Indeed, the full number is printed on the cheque, also serving as the “An Post Ref” (An Post being the Irish postal service), and the account information (10 digits, excluding the 6-digit IIN) is printed at the bottom of the same. All of this is why I didn’t want to share the sample file, and why I always destroy the statements that arrive, in paper form, from the banks. At this point, the likelihood of the barcode containing the same information was seriously high.

My usual Barcode Scanner app for Android didn’t manage to read the barcode, though, which made things awkward. Instead I decided to confirm I was looking at the content of the barcode in encoded form with a very advanced PDF inspection tool: strings $file | grep Font. This brought up a reference to /BaseFont /Code128ARedA, and that was the confirmation I needed. Indeed, a quick search for that name leads to a public domain font that implements Code 128 barcodes as a TrueType font. This is not uncommon; it’s the same method used by most label printers, including the Dymo I used to use for labelling computers.

At that point, a quick comparison of the barcode in front of me with one generated through an online generator (only for the IIN, because I didn’t want to leak the rest) confirmed I was looking at my credit card number, and that my tweet had just leaked it, in a bit of a strange encoding that may take some work to decode, but leaked it nonetheless. I called Ulster Bank and got the card cancelled and replaced.

What lessons can I learn from this experience? First of all, to consider credit card statements even more of a security risk than I ever imagined. It also gave me a practical instance of what Brian Krebs has advocated for years regarding barcodes on boarding passes and the like. In particular, it looks like both Ulster Bank and Tesco Bank use the same software to generate their credit card statements (easily told apart from the system that generates the normal bank statements), developed by Fiserv (their name is in the Author field of the PDF), and they both rely on using the full card number for payment.

This is something I don’t really understand. In Italy, you only use the 16-digit number to pay the bank one-off by wire, and the statements never include more than the last five digits of the card. Except for the Italian American Express; but that does not surprise me too much, as they manage it from London as well.

I’m now looking into how to improve the title guessing in the KFileMetadata library, although I’m warming up to the idea of just sending a patch that deletes that part of the code altogether: if the file has no title, no title is displayed. The simplest solutions are, usually, the better ones.


  1. The Wikipedia page appears to talk only about the UK system. Ireland, as usual, appears to have kept its own version of the same system, and all credit card statements, and most bills, have a similar pre-printed “credit cheque” at the bottom, even when they are direct-debited.
    [return]

A selection of good papers from USENIX Security ’17

I have briefly talked about Adrienne’s and April’s talk at USENIX Security 2017, but I have not given much light to the other papers and presentations that got my attention at the conference. I thought I should do a round-up of good content from this conference and, if I can manage, go back to it later.

First of all, the full proceedings are available on the Program page of the conference. As usual, USENIX’s open access policy means that everybody has access to these proceedings, and since we’re talking about academic papers, effectively everything I discuss here is available to the public. I know that some videos were recorded, but I’m not sure when they will be published1.

Before I link you to interesting content and give brief comments on it, I would like to start with a complaint about academic papers. The proper name of the conference is the 26th USENIX Security Symposium, and it’s effectively an academic conference. This means the content is all available in the form of papers. These papers are written, as usual, in LaTeX, and made available as 2-column PDFs, as is usual. Usual, but not practical. This is a perfect format for reading the paper on actual paper. But the truth is that nowadays this content is almost exclusively read in digital form.

I would love to have an ePub version of the various papers to just load onto an ebook reader, for instance2. But even just providing a clean HTML file would be an improvement! When reading these PDFs on a screen, you end up having to zoom in and move around a freaking lot because of the column format, and more than once that has been enough for me to stop caring and not read a paper unless I really had an interest in it, which I think is counterproductive.

Since I already wrote about Measuring HTTPS Adoption on the Web, I won’t go back to that particular presentation. Right after that one, though, Katharina Krombholz presented “I Have No Idea What I’m Doing” – On the Usability of Deploying HTTPS, which was definitely interesting in showing how complicated it still is to set up HTTPS properly, without even going into more advanced features such as HPKP, CSP and the like.

And speaking of those, an old acquaintance of mine from university times3, Stefano Calzavara, presented CCSP: Controlled Relaxation of Content Security Policies by Runtime Policy Composition (my, what a mouthful!), and I really liked the idea. Effectively, the premise is that CSP is too complicated to use, and is turning away a significant number of people from implementing even the basic parts of a security policy. This fits very well with the previous talk, and with my experience: this blog currently depends on a few external resources and scripts, namely Google Analytics, Amazon OneLink, and Font Awesome, and I can’t really spend the time to keep a policy in sync with every change.
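For a sense of what maintaining such a policy means, a starting point for this blog’s dependencies might look like the header below (shown wrapped for readability; the hosts, particularly the Font Awesome CDN, are illustrative and would need checking against how each service is actually embedded):

    Content-Security-Policy: default-src 'self';
        script-src 'self' https://www.google-analytics.com;
        img-src 'self' https://www.google-analytics.com;
        style-src 'self' https://use.fontawesome.com;
        font-src https://use.fontawesome.com

Every new embed, widget, or change in how a third party loads its own sub-resources means revisiting this list, which is exactly the maintenance burden that turns people away.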

In the same session as Stefano, Iskander Sanchez-Rola presented Extension Breakdown: Security Analysis of Browsers Extension Resources Control Policies, which sounded immediately familiar, as it overlaps with and extends my own complaint back in 2013 that browser extensions were becoming the next source of entropy for fingerprinting, replacing plugins. Since I had dinner with Stefano, Iskander and Igor (a co-author of the paper), we managed to have quite a chat on the topic. I’m glad to see that my hunch back in the day was not completely off, and that there is more interest in fixing this kind of problem nowadays.

Another interesting talk to hear was Understanding the Mirai Botnet, which revealed one very interesting bit of information: the attack on Dyn that caused a number of outages just last year appears to have had as its target not the Dyn service itself, but rather the Sony PlayStation Network, and should thus be looked at in the light of the previous attacks on that network. This should remind everyone that just because you personally get something out of a certain attack, you should definitely not cheer it on; you may be the next target, even just as a bystander.

Now, not all the talks were exceptional. In particular, I found See No Evil, Hear No Evil, Feel No Evil, Print No Evil? Malicious Fill Patterns Detection in Additive Manufacturing a bit… hype-driven, in the sense that the whole premise of considering 3D-printed sourcing trusted by default, and then figuring out a minimal amount of validation, seems to stem from the crowd that has been insisting for the past ten years or so that 3D printing is the future. While 3D printing clearly is interesting, and has a huge amount of use for prototyping, one-off designs and even cosplay, it does not seem to have gotten as far as people kept thinking it would. And at least from the talk, and from skimming the paper, I couldn’t find a good explanation of how it compares against trust in “classic” manufacturing.

On a similar note, I found the out-of-band call verification system proposed by AuthentiCall: Efficient Identity and Content Authentication for Phone Calls not particularly enticing, as it appears to leave out all the details of identity verification and trust, and assumes a fairly North American view of the communication space.

Of course I was interested in the talk about mobile payments, Picking Up My Tab: Understanding and Mitigating Synchronized Token Lifting and Spending in Mobile Payment, given my previous foray into related topics. It was indeed good, although the final answer, adding a QR code for two-way verification of who it is you’re paying, sounds like a NIH reimplementation of the EMV protocol. It is worth reading to learn about the absurd implementation of Magnetic Secure Transmission used in Samsung Pay: spoilers, it implements magnetic stripe payments through a mobile phone.

For the less academically inclined, TrustBase: An Architecture to Repair and Strengthen Certificate-based Authentication appears fairly interesting, particularly as the source code is available. The idea is to move the implementation of SSL clients into an operating system service, rather than into libraries, so that it can be configured once and for all at the system level, including selecting the available ciphers and the authorities to trust. It sounds good, but at the same time it sounds a lot like what NSS (the Mozilla one, not the glibc one) tried to implement. Except that didn’t go anywhere, and not just because of API differences.

But it wouldn’t be an interesting post (or conference) without a bit of controversy. A Longitudinal, End-to-End View of the DNSSEC Ecosystem was an interesting talk, and one that once again confirmed the fears about the lack of proper DNSSEC support in the wild right now. But in that very same talk, the presenters pointed out that they used a service called Luminati to get access to endpoints within major ISPs’ networks to test DNSSEC resolution. While I understand why such a service would be useful in these circumstances, I need to remind people that Luminati is not one of the good guys!

Indeed, Luminati is described as allowing you to request access to connections with certain characteristics. What it omits to say is that it does so by routing through the connections of users who installed the Hola “VPN” tool. If you haven’t come across it, Hola is one of the many extensions that let users appear to connect from a different country to fool Netflix and other streaming services. Besides being against terms of service (but who cares, right?), in 2015 Hola was found to be compromising its users. In particular, users running Hola are running the equivalent of a Tor exit node, without any of the security measures that protect Tor’s users, and, because its target is non-expert users trying to watch content not legally available in their country, without a good understanding of what such an exit node allows.

I cannot confirm whether they currently still give users of the “commercial” service access to the full local network of exit users, which would include router configuration pages (cough, DNS hijacking, cough) and local office LANs that are usually trusted more than they should be. But that was clearly the case before, so it gives you quite an idea.

So here is my personal set of opinions, and a number of pointers to good and interesting talks and papers. I just wish they were more usable by non-academics, rather than being forced into LaTeX’s 2-column format, but I’m afraid the two worlds shall never meet enough.


  1. As it turns out, you can blame me a little bit for this part: I promised to help out.
    [return]
  2. Thankfully, for USENIX conferences, the full proceedings are available as ePub and Mobi. Although the size is big enough that you can’t use the mail-to-Kindle feature.
    [return]
  3. All the two weeks I managed to stay in it.
    [return]

Threat models: the sushi place’s static website

At the USENIX Security Symposium 2017, Adrienne Porter Felt and April King gave a terrific presentation about HTTPS adoption, and in particular showed the problems with the long tail of websites that are not set up for it, or at least not set up correctly. After the talk, one of the people asking questions explicitly said that there is no point for a static website, such as that of the sushi place down the road, to use HTTPS. As you can imagine, many of the people in the room (me included) disagree with this opinion drastically, and both April and Adrienne took issue with that part of the question.

At the time on Twitter, and later that day while chatting with people, I brought up the example of Comcast injecting ads into cleartext websites – a link that is itself served insecurely, ironically – and April also pointed out that this is extremely common in East Asia too. A friend once complained about unexpected ads when browsing on a Vodafone 4G connection, which didn’t appear on a normal WiFi connection; probably a very similar situation. While this is annoying, you can at least assume that what these ISPs are doing is benign, or at least not explicitly malicious.

But you don’t have to be an ISP in the common sense to be able to inject content into non-HTTPS websites. You can, for instance, have control over a free WiFi connection. It does not even have to be a completely open, unencrypted WiFi: whoever controls the system routing a WPA connection can also modify the data passing through it. That usually means the local coffee shop, the coffee shop’s sysadmin or MSP, or, if you think you’re smart, your VPN provider.

Even more importantly, all these websites are targets for DNS hijackers, such as the one I talked about last year. Unsecured routers on which it’s not possible to get a root shell – and which are therefore not vulnerable to worms such as Mirai – can still have their DNS settings hijacked, at which point the attacker is free to redirect the resolution of whichever hostnames they choose.

This is even more trivial in independent coffee shops. Chains (big and small) usually sign up with a managed provider that sets up various captive portals, session profiling and “growth hacks”, but smaller shops often just plug a standard router into their DSL line and in many cases don’t even change the default passwords. And since you’re connecting from the local network, you don’t even need to figure out how to exploit it from the WAN.

It does not take a particularly sophisticated setup to check whether the intended host supports HTTPS and, if it does not, to serve a different IP and redirect the traffic to a transparent proxy that injects content, without the need for a “proper” man in the middle on the network. DNSSEC/DANE could protect against this, but its deployment does not seem to be happening right now.
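To give an idea of how low the bar is, here is a minimal Python sketch of the kind of check such a setup performs (the hostname is just a placeholder): if a TLS handshake with a valid certificate cannot be completed, the cleartext traffic is fair game for the injecting proxy.

```python
import socket
import ssl

# Sketch only: probe whether a host serves HTTPS with a valid
# certificate. A hijacker runs the same trivial test to decide
# which hostnames are "safe" to intercept and rewrite.
def supports_https(hostname: str, timeout: float = 5.0) -> bool:
    context = ssl.create_default_context()
    try:
        with socket.create_connection((hostname, 443), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=hostname):
                return True  # handshake and certificate validation succeeded
    except OSError:
        return False

print(supports_https("example.com"))
```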

These are of course all problems for the end users rather than for the sushi restaurant, and I would not be surprised if some small shop operators answered that these problems should be solved by someone else, and that they should not spend time figuring them out, since they don’t directly affect them. So let me paint a different picture.

Let’s say the sushi restaurant has an unfriendly competitor, ready to pay some of those shady DNS hijackers to specifically target the restaurant’s website and play some tricks. Of course, everything you can do at this point through content injection or modification you could also do by defacing the website, and encrypting the connection would not stop that; but a defacement is usually significantly simpler to notice, as every visitor sees the defaced content, including the owner.

Instead, targeting a subset of connections via DNS hijacking makes it much less likely to be noticed. And at that point you can make simple, subtle changes, such as publishing the wrong phone number (to stop people from making reservations), changing the opening hours to something unwelcoming, or raising the menu prices just enough that the place no longer looks worth visiting. While these are only theoretical, I think any specialist who has done sysadmin-for-hire jobs for smaller local businesses has heard them ask for similarly shady (or worse) tasks at least once. And I would be surprised if nobody took these opportunities.

But there are a number of other situations in which unasserted content integrity can be interesting to attackers in subtle ways, even for sites that are static, not confidential, and not even controversial — I guess everybody can agree that adult entertainment websites need to be encrypted. For instance, you could undercut referral revenue by replacing links to Amazon and other referral programs with alternative ones (or just dropping the referral code). You could technically do the same for things like AdSense, but most of those services check which page the code is embedded in, making this type of scam very easy to catch; referral programs are easier to play around with.

What this means is that there are plenty of good reasons to spend time making sure that small, long-tail websites are available over HTTPS. And yes, there are some sites for which the loss of compatibility is a problem (say, VideoLAN, which still gets visitors on Windows XP). But in those cases you can use conditional redirects, and only provide the non-HTTPS connection to users of very old browsers or operating systems, rather than keeping it available to everyone else.
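As a sketch of what such a conditional redirect could look like in nginx (the server name is a placeholder, and the User-Agent patterns are illustrative examples, not a vetted compatibility list):

```nginx
server {
    listen 80;
    server_name www.example.com;  # placeholder name

    # Redirect to HTTPS unless the client looks like a legacy system;
    # the patterns below are examples only, not a vetted list.
    if ($http_user_agent !~* "Windows NT 5\.1|MSIE [67]\.") {
        return 301 https://$host$request_uri;
    }

    # Legacy clients keep getting the cleartext site.
    root /var/www/example;
}
```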

Free Idea: a filtering HTTP proxy for securing web applications

This post is part of a series of free ideas that I’m posting on my blog in the hope that someone with more time can implement them. It’s effectively a rough proposal that comes with no design attached, but if you have time you would like to spend learning something new, and no idea what to do, it may be a good fit for you.

Going back to a topic I wrote about before, and the fact that I’m trying to set up a secure WordPress instance, I would like to throw out another idea I won’t have time to implement myself any time soon.

When running complex web applications, such as WordPress, defense in depth is good security practice. This means that in addition to locking down what the code can do to the state of the local machine, it also makes sense to limit what it can do to external state and the Internet at large. Indeed, even if attackers cannot drop a shell on a remote server, there is value (negative for the world, positive for them) in at least being able to use it for DDoS (e.g. through an amplification attack).

With that in mind, if your app does not require network access at all, or the network dependency can be sacrificed (like I did for Typo), just blocking the user from making outgoing connections with iptables would be enough. The --uid-owner option makes it very easy to match new connections by the user opening them, and thus stop a single user from transmitting unwanted traffic. Unfortunately, this does not always work, because sometimes the application really needs network support. In the case of WordPress, there is a definite need to contact the WordPress servers, both to install plugins and to check whether it should self-update.
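As a concrete sketch (assuming the webapp runs as a dedicated user, here called typo as a placeholder), the whole lockdown is a single rule; matching only NEW connections leaves replies to inbound traffic alone:

```sh
# Reject any *new* outgoing connection from the webapp's UID.
# Established connections (i.e. replies to clients) are unaffected.
iptables -A OUTPUT -m owner --uid-owner typo \
         -m conntrack --ctstate NEW -j REJECT
```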

You could try to limit which hosts the user can access, but that’s not easy to implement right either. Sticking with WordPress as the example: if you wanted to limit access to the WordPress infrastructure, you would effectively have to allow access to *.wordpress.org, and this can’t really be done in iptables, as far as I know, since the firewall only sees literal IP addresses. You could rely on FcRDNS to verify the connections, but that can be slow, and anyone able to poison the server’s DNS cache is effectively in control of this kind of ACL. I’m ignoring the option of just using “standard” reverse DNS resolution, because in that case you don’t even need to poison DNS: whoever controls the IP decides what it reverse-resolves to.

So what you need to do is filter at the connection-request level, which is what proxies are designed for. I’ll assume we want a non-terminating proxy (because terminating proxies are hard), but even then the proxy knows which (forward) hostname the client wants to reach, and at that point *.wordpress.org becomes a valid ACL to use. This is something you can actually do relatively easily with Squid, for instance: indeed, it’s the whole point of tools such as ufdbguard (which I used to maintain for Gentoo) and the ICP protocol. But Squid is designed first and foremost as a caching proxy; it’s not lightweight at all, and it can easily become a liability to have in your server stack.
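For reference, the core of such a policy in Squid is only a few lines (a sketch, not a complete squid.conf):

```
# Allow requests (including CONNECT tunnels) only towards the
# WordPress infrastructure, deny everything else.
acl wordpress dstdomain .wordpress.org
http_access allow wordpress
http_access deny all
```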

Up to now, what I have used to reduce the attack surface of my webapps is to set them behind a tinyproxy, which does not really allow for per-connection ACLs. This only provides isolation against random non-proxied connections, but it’s a starting point. And here is where I want to provide a free idea for anyone who has the time and would like to build better security tools for server-side defense in depth.

A server-side proxy for this kind of security usage would have to provide ACLs, with both positive and negative lists. You may want to allow full access to *.wordpress.org, but at the same time block all non-TLS-encrypted traffic, to avoid the possibility of a downgrade (given that WordPress silently downgrades requests to api.wordpress.org, which I talked about before).
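Purely to illustrate the semantics I have in mind (this proxy does not exist, and the syntax below is entirely invented):

```
# Hypothetical policy file for a hypothetical proxy.
allow connect *.wordpress.org:443   # TLS to the WordPress infrastructure
deny  connect *:80                  # no cleartext, no silent downgrades
deny  all
```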

Even better, such a proxy should be able to distinguish ACLs based on which user (i.e. which webapp) is making the request. The obvious way would be to provide separate usernames to authenticate to the proxy — which, again, Squid can do, but it’s designed for clients for which validating the username and password actually matters. For this usage, I would ignore the password altogether and take the username at face value, since the connections should only ever be local. I would be even happier if, instead of pseudo-authenticating to the proxy, the proxy could figure out which (local) user the connection came from by inspecting the TCP socket, kind of like how querying the ident protocol used to work for IRC.
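On Linux the kernel already exposes the owner of every socket, so no ident daemon is needed. A minimal Python sketch of the lookup, assuming IPv4 and no error handling (a real implementation would also scan /proc/net/tcp6 and deal with races):

```python
# Map the source port of an accepted local connection back to the
# UID that owns the client's socket, by scanning /proc/net/tcp.
def uid_of_local_peer(peer_port: int) -> int | None:
    with open("/proc/net/tcp") as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            _, local_port = fields[1].split(":")  # hex "addr:port"
            if int(local_port, 16) == peer_port:
                return int(fields[7])  # the "uid" column
    return None
```

The proxy would call this with the port returned by getpeername() on the accepted connection; since only local connections are accepted, the client’s socket is guaranteed to appear in that table.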

So, to summarise, what I would like to have is an HTTP(S) proxy focused on securing server-side web applications. It does not have to support TLS transport (it should only accept local connections), nor does it need to be a terminating proxy. It should support ACLs that allow or deny access to a subset of hosts, possibly per user, without needing a user database of any sort, and even better if it can tell by itself which user the connection came from. I’m more than happy if someone tells me this already exists, or, if not, if someone starts writing it… thank you!