A selection of good papers from USENIX Security ’17

I have briefly talked about Adrienne’s and April’s talk at USENIX Security 2017, but I have not shone much light on the other papers and presentations that caught my attention at the conference. I thought I should do a round-up of the good content from this conference and, if I can manage, come back to it later.

First of all, the full proceedings are available on the Program page of the conference. As usual, USENIX’s open access policy means that everybody has access to these proceedings, and since we’re talking about academic papers, effectively everything I’m discussing here is available to the public. I know that some videos were recorded, but I’m not sure when they will be published1.

Before I link you to interesting content and give brief comments on it, I would like to start with a complaint about academic papers. The proper name of the conference is the 26th USENIX Security Symposium, and it’s effectively an academic conference, which means the content is all available in the form of papers. These papers are written, as usual, in LaTeX, and distributed as two-column PDFs, as is customary. Customary, but not practical. This is a perfect format for reading a paper on actual paper, but the truth is that nowadays this content is almost exclusively read in digital form.

I would love to have an ePub version of the various papers to just load on an ebook reader, for instance2. But even just providing a clean HTML file would be an improvement! When reading these PDFs on a screen, you end up having to zoom in and move around a freaking lot because of the column format, and more than once that has been enough for me to stop caring and not read a paper unless I had a real interest in it, which I think is counterproductive.

Since I already wrote about Measuring HTTPS Adoption on the Web, I won’t go back to that particular presentation. Right after that one, though, Katharina Krombholz presented “I Have No Idea What I’m Doing” – On the Usability of Deploying HTTPS, which was definitely interesting in showing how complicated it still is to set up HTTPS properly, without even going into more advanced features such as HPKP, CSP and similar.

And speaking of these, an old acquaintance of mine from my university time3, Stefano Calzavara, presented CCSP: Controlled Relaxation of Content Security Policies by Runtime Policy Composition (my, what a mouthful!), and I really liked the idea. Effectively, the premise is that CSP is too complicated to use, and is turning away a significant number of people from implementing even the basic parts of a security policy. This fits very well with the previous talk, and with my experience: this blog currently depends on a few external resources and scripts, namely Google Analytics, Amazon OneLink, and Font Awesome, and I can’t really spend the time keeping a policy in sync with all of them all the time.
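To give a sense of why that puts people off: even for a blog with just those three dependencies, a baseline policy needs every third-party host spelled out by hand, and it breaks the moment any of them changes. A rough sketch (hosts illustrative, not this blog’s actual policy; a real policy is a single header line):

```text
Content-Security-Policy:
    default-src 'self';
    script-src  'self' https://www.google-analytics.com;
    img-src     'self' https://www.google-analytics.com;
    style-src   'self' https://use.fontawesome.com;
    font-src    https://use.fontawesome.com
```

Every new widget or CDN host means another edit, which is exactly the maintenance burden CCSP tries to relax.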

In the same session as Stefano, Iskander Sanchez-Rola presented Extension Breakdown: Security Analysis of Browsers Extension Resources Control Policies, which sounded immediately familiar to me, as it overlaps with and extends my own complaint back in 2013 that browser extensions were becoming the next source of entropy for fingerprinting, replacing plugins. Since we had dinner with Stefano, Iskander and Igor (a co-author of the paper above), we managed to have quite a chat on the topic. I’m glad to see that my hunches back in the day were not completely off, and that there is more interest in fixing this kind of problem nowadays.

Another interesting talk was Understanding the Mirai Botnet, which revealed one very interesting bit of information: the attack on Dyn that caused a number of outages just last year appears to have had as its target not the Dyn service itself but rather the Sony PlayStation Network, and should thus be looked at in the light of the previous attacks on it. This should remind everyone that even if you personally stand to gain something from a certain attack, you should definitely not cheer it on; you may be the next target, even if just as a bystander.

Now, not all the talks were exceptional. In particular, I found See No Evil, Hear No Evil, Feel No Evil, Print No Evil? Malicious Fill Patterns Detection in Additive Manufacturing a bit… hype-y. The whole premise of considering 3D-printed sourcing as trusted by default, and then figuring out a minimal amount of validation, seems to stem from the crowd that has been insisting, for the past ten years or so, that 3D printing is the future. While 3D printing clearly is interesting, and has a huge number of uses in prototyping, one-off designs and even cosplay, it does not seem to have got as far as people kept thinking it would. And at least from the talk, and from skimming the paper, I couldn’t find a good explanation of how this compares against trust in “classic” manufacturing.

On a similar note, I did not find particularly enticing the out-of-band call verification system proposed by AuthentiCall: Efficient Identity and Content Authentication for Phone Calls, which appears to leave out all the details of the identity verification and trust system, and assumes a fairly North American point of view on the communication space.

Of course I was interested in the talk about mobile payments, Picking Up My Tab: Understanding and Mitigating Synchronized Token Lifting and Spending in Mobile Payment, given my previous foray into related topics. It was indeed good, although the final answer of adding a QR code for a two-way verification of who it is you’re paying sounds like a NIH reimplementation of the EMV protocol. It is worth reading just to learn about the absurd implementation of Magnetic Secure Transmission used in Samsung Pay: spoilers, it implements magnetic stripe payments through a mobile phone.

For the less academic among you, TrustBase: An Architecture to Repair and Strengthen Certificate-based Authentication appears fairly interesting, particularly as the source code is available. The idea is to move the implementation of SSL clients into an operating system service, rather than into libraries, so that it can be configured once and for all at the system level, including selecting the available ciphers and the Authorities to trust. It sounds good, but at the same time it sounds a lot like what NSS (the Mozilla one, not the glibc one) tried to implement. Except that didn’t go anywhere, and not just because of API differences.

But it can’t be an interesting post (or conference) without a bit of controversy. A Longitudinal, End-to-End View of the DNSSEC Ecosystem was an interesting talk, and one that once again confirmed the fears around the lack of proper DNSSEC support in the wild right now. But in that very same talk, the presenter pointed out how they used a service called Luminati to get access to endpoints within major ISPs’ networks to test their DNSSEC resolution. While I understand why such a service would be useful in these circumstances, I need to remind people that Luminati is not one of the good guys!

Indeed, Luminati is described as allowing you to request access to connections matching certain characteristics. What it omits to say is that it does so by piggybacking on the connections of users who installed the Hola “VPN” tool. If you haven’t come across it, Hola is one of the many extensions that allow users to appear as if they were connecting from a different country, to fool Netflix and other streaming services. Besides being against those services’ terms of service (but who cares, right?), in 2015 Hola was found to be compromising its users. In particular, users running Hola are running the equivalent of a Tor exit node, without any of Tor’s security measures to protect them, and – because its target is non-expert users who are trying to watch content not legally available in their country – without a good understanding of what such an exit node allows.

I cannot confirm whether they still allow users of the “commercial” service access to the full local network, which would include router configuration pages (cough DNS hijacking cough) and local office LANs that are usually trusted more than they should be. But it gives you quite an idea, as that was clearly the case before.

So here is my personal set of opinions and a number of pointers to good and interesting talks and papers. I just wish they were more usable by non-academics, rather than being forced into the two-column paper format, but I’m afraid the two worlds shall never meet enough.


  1. As it turns out, you can blame me a little bit for this part: I promised to help out.
  2. Thankfully, for USENIX conferences, the full proceedings are available as ePub and Mobi, although their size is big enough that you can’t use the mail-to-Kindle feature.
  3. That is, all of the two weeks I managed to stay in it.

EFF’s Panopticlick at Enigma 2016

One of the things I was most interested to hear about at Enigma 2016 was news about EFF’s Panopticlick. For context, here is the talk from Bill Budington:

I wrote before about the tool, but they have recently reworked and rebranded it to use it as a platform for promoting their Privacy Badger, which I don’t particularly care for. For my purposes, they luckily still provide the detailed information, and this time around they make it more prominent that they rely on the fingerprintjs2 library for it. Which means I could actually try and extend it.

I tried to bring up one of my concerns at the post-talk Q&A at the conference (the Q&As were not recorded), so I thought it would be nice to publish my few comments about the tool as it is right now.

The first comment is this: both Panopticlick and Privacy Badger do not consider the idea of server-side tracking. I have said it before, and I will repeat it now: there are plenty of ways to identify a particular user, even across sites, just by tracking behaviours that can be observed passively on the server side. Bill Budington’s answer to this at the conference was that Privacy Badger allows cookies only if there is a policy in place from the site, counting on that policy being binding for the site.

But this does not mean much — Privacy Badger may stop the server from setting a cookie, but there are plenty of behaviours that can be observed without the help of the browser, or even more interestingly, with the help of Privacy Badger, uBlock, and similar other “privacy conscious” extensions.

Indeed, not allowing cookies is, already, a piece of trackable information. And that’s where the problem with self-selection, which I already hinted at before, comes in: when I ran Panopticlick on my laptop earlier, it told me that one out of 1.42 browsers has cookies enabled. While I don’t have access to hard statistics about this, I do not think it’s realistic to say that about 30% of browsers have cookies disabled.
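As a sanity check on that number: Panopticlick’s “one in N browsers” figures convert directly into a population share and into bits of identifying information. A quick sketch of the arithmetic behind my 30% remark (the 1.42 is the figure I was shown; your run will differ):

```python
import math

# Panopticlick reports "one in N browsers share this value"; 1/N is the
# share of browsers reporting that value, and log2(N) is how many bits
# of identifying information the value carries.
one_in = 1.42
share_with_cookies = 1 / one_in   # share of browsers with cookies enabled
bits = math.log2(one_in)          # identifying information, in bits

print(f"{share_with_cookies:.0%} with cookies, {bits:.2f} bits")
```

So the tool is effectively claiming that almost a third of its visitors run without cookies, which says more about who runs Panopticlick than about browsers at large.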

If you connect this to the commentary on what NSA’s Rob Joyce said at the closing talk, which unfortunately I was not present for, you could say that the fact that Privacy Badger is installed, and fetches a given path from a server trying to set a cookie, is a good way to figure out information about a person, too.

The other problem is more interesting. In the talk, Budington briefly introduces the concept of Shannon entropy, although not by that name, and gives an example of the different amounts of entropy provided by knowing someone’s zodiac sign versus knowing their birthday. He also points out that these two pieces of information are not independent, so you cannot simply sum their entropies together, which is indeed correct. But there are two problems with that.

The first is that the Panopticlick interface does seem to treat all the information it gathers as at least partially independent, and indeed shows a number of entropy bits higher than the single highest entry it has. But it is definitely not the case that all entries are independent. Even leaving aside browser-specific things such as the type of images requested and so on, for many languages (though not English) there is a timezone correlation: the vast majority of Italian users would report the same timezone, either +1 or +2 depending on the time of year; sure, there are expats and geeks, but they are definitely not as common.
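The language/timezone correlation can be made concrete with a toy calculation (the sample below is entirely made up to illustrate the point, not real browser data): summing the per-attribute entropies overcounts whenever the attributes are correlated, and the joint entropy is what the pair actually reveals.

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy, in bits, of a distribution given as raw counts."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

# Toy sample of (language, timezone) pairs: Italian users cluster in +1,
# so the two attributes are far from independent.
users = [("it", "+1")] * 90 + [("it", "+9")] * 10 + \
        [("en", "+0")] * 40 + [("en", "-5")] * 40 + [("en", "+10")] * 20

h_lang  = entropy(Counter(lang for lang, _ in users).values())
h_tz    = entropy(Counter(tz for _, tz in users).values())
h_joint = entropy(Counter(users).values())

# Naively summing overstates the information by about a full bit here.
print(round(h_lang + h_tz, 3), round(h_joint, 3))
```

In this toy sample, knowing the timezone almost determines the language, so the pair carries barely more information than the timezone alone.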

The second problem is that there is a more interesting approach to take when you are submitted key/value pairs of information that should not be independent, in independent ways. Going back to the example of date of birth and zodiac sign: the calculation of entropy in that example starts from facts, in particular facts about which people cannot lie — whereas I’m sure that in any one database of registered users, January 1st is skewed, with many more than 1/365th of the users claiming it as their birthday.

But what happens if the information is gathered separately? If you ask a user for both their zodiac sign and their date of birth, separately, they may lie. And when (not if) they do, you may get a more interesting piece of information. Because if you have a network of separate social sites/databases in which only one user ever selects being born on February 18th but being a Scorpio, you have a very strong signal that it might be the same user across them.
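To make that concrete, here is a toy sketch (the databases, user IDs, and helper are all hypothetical, and the sign boundaries approximate): a pair of answers that is individually unremarkable but jointly inconsistent acts as a rare, linkable signature.

```python
# Approximate zodiac boundaries: (month, day) on which each sign starts.
ZODIAC = [
    (1, 20, "aquarius"), (2, 19, "pisces"), (3, 21, "aries"),
    (4, 20, "taurus"), (5, 21, "gemini"), (6, 21, "cancer"),
    (7, 23, "leo"), (8, 23, "virgo"), (9, 23, "libra"),
    (10, 23, "scorpio"), (11, 22, "sagittarius"), (12, 22, "capricorn"),
]

def implied_sign(month: int, day: int) -> str:
    """The sign a birth date implies; capricorn wraps over the year end."""
    sign = "capricorn"
    for m, d, s in ZODIAC:
        if (month, day) >= (m, d):
            sign = s
    return sign

# Two hypothetical, unrelated databases, each asking for birthday and sign
# separately. A February 18th Scorpio is an inconsistent (lying) answer.
site_a = {"u1": ((2, 18), "scorpio"), "u2": ((7, 30), "leo")}
site_b = {"x9": ((2, 18), "scorpio"), "x3": ((7, 30), "leo")}

def inconsistent(db):
    """Users whose claimed sign does not match their claimed birthday."""
    return {u for u, ((m, d), sign) in db.items() if implied_sign(m, d) != sign}

print(inconsistent(site_a), inconsistent(site_b))
```

The consistent pair (late July, leo) is shared with everyone honest born in that window; the inconsistent one is likely unique, and seeing it in both databases strongly suggests the same person behind u1 and x9.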

This is the same situation I described some time ago of people changing their User-Agent string to try to hide, but then creating unique (or nearly unique) signatures of their passage.

Also, while Panopticlick will tell you if the browser is doing anything to avoid fingerprinting (how?), it still does not seem to tell you whether any of your extensions are making you more unique. And since it’s hard to tell whether some JavaScript bit is trying to load a higher-definition picture, or hide pieces of the UI for your small screen, versus telling the server about your browser setup, trackers hardly care whether you disabled your cookies…

For a more proactive approach to improving users’ privacy, we should ask more browser vendors to do what Mozilla did six years ago and sanitize what their User-Agent content should be. Currently, Android mobile browsers report both the device type and the build number, which makes them much easier to track, even though the suggestion has been, up to now, to use mobile browsers because they look more like each other.

And we should start wondering how much a given browser extension adds to or subtracts from the uniqueness of a session. Because I think most of them are currently adding to the entropy, even those that are designed to “improve privacy.”

Inspecting and knowing your firmware images

Update: for context, here’s the talk I was watching while writing this post.

Again posting about the Enigma conference. Teddy Reed talked about firmware security, in particular based on pre-boot EFI services. The video will be available at some point; it talks in detail about osquery (which I’d like to package for Gentoo), but also carries a lower-key announcement of something I found very interesting: VirusTotal is now (mostly) capable of scanning firmware images from various motherboard manufacturers.

The core of this implementation leverages two open-source tools: uefi_firmware by Teddy himself, and UEFITool by Nikolaj Schlej. They are pretty good, but since this is still in the early stages, there are a few things to iron out.

For instance, when I first scanned the firmware of my home PC, it was reported with a clear malware marker, which made me suspicious – and indeed got ASUS to take notice and look into it themselves – but it looks like it was a problem with parsing the file; Teddy’s looking into it.

On the other hand, sticking with ASUS, my ZenBook shows in its report the presence of CompuTrace — luckily for me I don’t run this on Windows.

This tool is very interesting from many different points of view, because not only will it (maybe in due time, as firmware behaviour analysis improves) provide information about possibly-known malware (such as CompuTrace) in a firmware upgrade before you apply it, but even before you buy the computer.

And this is not just about malware. The information that VirusTotal provides (or, to be precise, the tools behind it provide) includes information about certificates, which for instance told me that my home PC would allow me to install Ubuntu under SecureBoot, since the Canonical certificate is present — or, according to Matthew Garrett, it will allow an Ubuntu-signed bootloader to boot just about anything, defeating SecureBoot altogether.

Unfortunately this only works for manufacturers that provide raw firmware updates right now. ASUS and Intel both do that, but for instance Dell devices will provide the firmware upgrade only as a Windows (or DOS) executable. Some old extraction instructions exist, but they are out of date. Thankfully, Nikolaj pointed me at a current script that works at least for my E6510 laptop — which by the way also has CompuTrace.

That script, though, fails with my other Dell laptop, a Vostro 3750 — in that case, you can get your hands on the BIOS image by simply executing the updater with Wine (it will fail with an obscure error message) and then fetching the image from Wine’s temporary folder. Unfortunately, neither approach works with the updater for more recent laptops such as the XPS 13 (which I’m considering buying to replace the Zenbook), so I should possibly look into extending the script if I can manage to get it to work, although Nikolaj, with much more experience than me, tried and failed to get a valid image out of it.

To complete the post, I would like to thank Teddy for pointing the audience to Firmware Security — I know I’ll be reading a lot more about that soon!

Usable security: the sudo security model

Edit: here’s the talk I was watching while writing this post, for context.

I’m starting to write this while I’m at Enigma 2016, listening to the usable security track. I think it’s a perfectly good time to start talking publicly about my experience trying to bring security to a Unix-based company I worked for before.

This is not a story of a large company; the one I worked for was fairly small, with five people working in it at the time I was there. I’ll use “we”, but I will point out that I’m no longer at that company and this is all in the past; I hope and expect the company to have improved their practices. When I joined the company, it was working on a new product, which meant we had a number of test servers running within the office and only one real “production” server for it running in a datacenter. In addition to the new product, a number of servers for a previous product were in production, plus a couple of local testing servers for those.

While there was no gaping security hole for the company (otherwise I wouldn’t even be talking about it!) the security hygiene in the company was abysmal. We had an effective sysadmin at the office for the production server, and an external consultant to manage the network, but the root password (yes singular) of all devices was also known to the owner of the company, who also complained when I told them I wouldn’t share my password.

One of the few things that I wanted to set up there was stronger authentication, and stopping people from accessing everything with root privileges. As a stepping stone, I ended up using sudo, at least for the test servers (I never managed to put this into proper production).

We have all laughed at sudo make me a sandwich, but the truth is that it’s still a better security model than running as root, if used correctly. In particular, I did ask the boss what they wanted to run as root, and after getting rid of the need for root for a few actions that could be done unprivileged, I set up a whitelist of commands that their user could run without a password. They were mostly happy not to have to log in as root, but it was still not enough for me.
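The whitelist itself was nothing fancy; it looked roughly like this (the user name and command paths here are made up for illustration, not the actual rules):

```text
# /etc/sudoers.d/owner — illustrative sketch, not the real file.
# Routine, safe tasks run as root without a password:
owner ALL=(root) NOPASSWD: /usr/sbin/service myapp restart, \
                           /usr/bin/tail -f /var/log/myapp.log
# Anything else via sudo prompts for the user's own password, not root's:
owner ALL=(root) ALL
```

The point being that the shared root password stops being part of anyone’s daily routine, which is what makes retiring it politically possible.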

My follow-up ties into the start of this article, in particular the fact that I started writing this while listening to Jon Oberheide. What I wanted to achieve was an effective request for privilege escalation to root — that is, if someone were to actually find the post-it with the owner’s password, they wouldn’t get root access on any production system, even though they might be able to execute some (safe) routine tasks. At the time, my plan involved using Duo Security and a customized duo_unix, so that a sudo request for any non-whitelisted command (including sudo -i) would require confirmation on the owner’s phone. Unfortunately, at the time this hit two obstacles: the pull request with the code to handle PAM authentication for sudo was originally rejected (I’m not sure what the current state of that is; maybe it can be salvaged if it’s still not supported), and the owners didn’t want to pay for the Duo license – even just for the five of us, let alone providing it as a service to customers – even though my demo had them quite happy about the idea of only ever needing their own password (or ssh key, but let’s not go there for now).

This is just one of the many things that were wrong at that company, of course, but I think it shows a little bit that, even in system administration work, security and usability sometimes do go hand in hand, and a usable solution can make even a small company more secure.

And for those wondering: no, I’m in no way affiliated with Duo, I just find it a good technology, and I’m glad Dug pointed me at it a while back.

On the conference circuit

You may remember that I used not to be a fan of travel, and that for a while I was absolutely scared by the idea of flying. This has clearly not been the case in a while, given that I’ve been working for US companies and traveling a lot of the time.

One of the side effects of this is that I enjoy the “conference circuit”, to the point that I’m currently visiting three to four conferences a year, some of which for VideoLAN and others for work, and in a few cases for nothing in particular. This is an interesting way to keep in touch with what’s going on in the community and in the corporate world out there.

Sometimes, though, I wish I had more energy and skills to push through my ideas. I find it curious how nowadays it’s all about Docker and containers, while I jumped on the LXC bandwagon quite some time ago thanks to Tiziano, and because of that need I made Gentoo a very container-friendly distribution from early on. Similarly, O’Reilly now has a booklet on static site generators, which describes things not too far from what I’ve been doing since at least 2006 for my website, and for xine’s later on. Maybe if I hadn’t been so afraid of traveling at the time, I would have had more impact on this, but I guess (to use a flying metaphor) I lost my slot there.

To focus a bit more on SCaLE14x in particular, and especially on Cory Doctorow’s opening keynote, I have to say that the conference is again a good load of fun. Admittedly, I rarely manage to go listen to talks, but the number of people going in and out of the expo floor, and the random conversations struck up there, are always useful.

In the case of Doctorow’s keynote, while he’s (like many) a bit too convinced, in my opinion, that he has most if not all the answers, his final argument was a positive one: don’t try to be “pure” (as the FSF would like you to be); instead, hedge your bets by contributing (time, energy, money) to organizations and projects that work towards increasing your freedom. I was pleasantly surprised to hear Cory name, earlier in that talk, VLC and Handbrake — although part of the context in which he namechecked us is likely going to be a topic for a different post, once I have something figured out.

My current trip brings me to San Francisco tonight, for Enigma 2016, and on this note I would like to remind conferencegoers that, while most of us are aiming for a friendly and relaxed atmosphere, there is some opsec you should be looking into. I don’t have a designated conference laptop (just yet; I might get myself a Chromebook for that), but I do have at least a privacy screen. I’ve seen more than a couple of corporate email interfaces running on laptops while walking the expo floor this time.

Finally, I need to thank TweetDeck for their webapp. The ability to monitor hashtags, and particularly multiple hashtags from the same view is gorgeous when you’re doing back-to-back conferences (#scale14x, #enigma2016, #fosdem.) I know at least one of them is reading, so, thanks!

Conferencing

This past weekend I had the honor of hosting the VideoLAN Dev Days 2014 in Dublin, in the headquarters of my employer. This is the first time I have organized a conference (or rather helped organize it; Audrey and our staff did most of the heavy lifting), and I made a number of mistakes, but I think I can learn from them and do better the next time I try something like this.

Photo credit: me

Organizing an event in Dublin has some interesting and not-obvious drawbacks, one of which is the need for a proper visa for people who reside in Europe but are not EEA citizens, thanks to the fact that Ireland is not part of Schengen. I was expecting at least UK residents not to need any scrutiny, but Derek proved me wrong, as he had to get an (easy) visa on entry.

Getting just shy of a hundred people into a city like Dublin, which is far from being a metropolis like Paris or London, is an interesting exercise: yes, we had the space for the conference itself, but finding hotels and restaurants for that many people became tricky. A very positive shout-out is due to Yamamori Sushi, which hosted all of us without a fixed menu and without a hitch.

As usual, meeting in person with the people you work with in open source is a perfect way to improve collaboration — knowing how people behave face to face makes it easier to understand their behaviour online, which is especially useful when their attitude can be a bit grating online. And given that many people, including me, are known as proponents of Troll-Driven Development – or Rant-Driven Development, given that people like Anons, redditors and 4channers have given an even worse connotation to Troll – it’s really a requirement, if you are really interested in being part of the community.

This time around, I was even able to stop myself from gathering too much swag! I decided not to pick up a hoodie, and leave it to people who would actually use it, although I did pick up a Gandi VLC shirt. I hope I’ll be able to do that at LISA as I’m bound there too, and last year I came back with way too many shirts and other swag.