Thoughts and Concerns about Operation Onymous

What happened

Recently it was announced that a coalition of government agencies took control of many Tor hidden services. We were as surprised as most of you. Unfortunately, we have very little information about how this was accomplished, but we do have some thoughts which we want to share.

Over the last few days, we received and read reports saying that several Tor relays were seized by government officials. We do not know why the systems were seized, nor do we know anything about the methods of investigation which were used. Specifically, there are reports that three systems of Torservers.net disappeared and there is another report by an independent relay operator. If anyone has more details, please get in contact with us. If your relay was seized, please also tell us its identity so that we can request that the directory authorities reject it from the network.

But, more to the point, the recent publications call the targeted hidden service seizures "Operation Onymous" and say it was coordinated by Europol and other government entities. Early reports say 17 people were arrested and 400 hidden services were seized. Later reports have clarified that it was hundreds of URLs hosted on roughly 27 web sites offering hidden services. We have not been contacted, directly or indirectly, by Europol or any other agency involved.

We are most interested in understanding how these services were located, and whether this indicates a security weakness in Tor hidden services that could be exploited by criminals or by secret police repressing dissidents. We are also interested in learning why the authorities seized Tor relays even though their operation was targeting hidden services. Were these two events related?

How did they locate the hidden services?

So we are left asking "How did they locate the hidden services?". We don't know. In liberal democracies, we should expect that when the time comes to prosecute some of the seventeen people who have been arrested, the police will have to explain to the judge how the suspects came to be suspects, and that, as a side benefit of the operation of justice, Tor could learn whether there are security flaws in hidden services or other critical internet-facing services. We know through recent leaks that the US DEA and others have constructed a system of organized and sanctioned perjury which they refer to as "parallel construction."

Unfortunately, the authorities did not specify how they managed to locate the hidden services. Here are some plausible scenarios:

Operational Security

The first and most obvious explanation is that the operators of these hidden services failed to practice adequate operational security. For example, there are reports of one of the websites being infiltrated by undercover agents, and the affidavit describes various operational security errors.

SQL injections

Another explanation is exploitation of common web bugs like SQL injections or RFIs (remote file inclusions). Many of those websites were likely quickly-coded e-shops with a big attack surface. Exploitable bugs in web applications are a common problem.
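
As a generic illustration of this class of bug, here is a minimal sketch of an injectable query next to its parameterized fix. The tiny "e-shop" schema is hypothetical; this is not code from any of the seized sites, just the textbook shape of the vulnerability:

```python
import sqlite3

# Toy database standing in for a quickly-coded e-shop (hypothetical schema).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'hunter2'), ('admin', 's3cret')")

def login_vulnerable(name):
    # BUG: user input is concatenated straight into the query text.
    query = "SELECT * FROM users WHERE name = '%s'" % name
    return db.execute(query).fetchall()

def login_safe(name):
    # FIX: parameterized query; the driver handles escaping.
    return db.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

# A classic payload turns the WHERE clause into a tautology, dumping every
# row instead of matching a single user.
print(login_vulnerable("' OR '1'='1"))  # returns all rows
print(login_safe("' OR '1'='1"))        # returns no rows
```

An attacker who escalates from a bug like this to code execution on the web host can usually just ask the machine for its real address, which is why web-application flaws are a plausible deanonymization path.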

Bitcoin deanonymization

Ivan Pustogarov et al. have recently been conducting interesting research on Bitcoin anonymity.

Apparently, there are ways to link transactions and deanonymize Bitcoin clients even if they use Tor. Maybe the seized hidden services were running Bitcoin clients themselves and were victims of similar attacks.

Attacks on the Tor network

The number of takedowns and the fact that Tor relays were seized could also mean that the Tor network was attacked to reveal the location of those hidden services. We received some interesting information from an operator of a now-seized hidden service which may indicate this, as well. Over the past few years, researchers have discovered various attacks on the Tor network. We've implemented some defenses against these attacks, but these defenses do not solve all known issues and there may even be attacks unknown to us.

For example, some months ago, someone was launching non-targeted deanonymization attacks on the live Tor network. People suspect that those attacks were carried out by CERT researchers. While the bug was fixed and the fix quickly deployed in the network, it's possible that as part of their attack, they managed to deanonymize some of those hidden services.

Another possible Tor attack vector could be the Guard Discovery attack. This attack doesn't directly reveal the identity of the hidden service, but it allows an attacker to discover the guard node of a specific hidden service. The guard node is the only node in the whole network that knows the actual IP address of the hidden service. Hence, if the attacker manages to compromise the guard node or somehow obtain access to it, she can launch a traffic confirmation attack to learn the identity of the hidden service. We've been discussing various solutions to the guard discovery attack for the past several months, but it's not an easy problem to fix properly. Help and feedback on the proposed designs is appreciated.

*Similarly, there is an attack in which the hidden service selects the attacker's relay as its guard node. This may happen randomly, or the attacker may force it: if the hidden service has chosen another relay as its guard, the attacker can render that node unusable, by a denial-of-service attack or similar, so that the hidden service is forced to select a new guard. Eventually, the hidden service will select the attacker's relay.
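
To make the mechanics concrete, here is a toy simulation of that guard-forcing strategy. It deliberately ignores Tor's real guard-selection algorithm (which weights relays by bandwidth and flags) and just picks uniformly at random from the relays the service still considers usable, so it only illustrates the shape of the attack:

```python
import random

# Toy model: one attacker relay among 100 candidates. This is NOT Tor's
# actual guard-selection algorithm; it is a uniform-random sketch.
relays = ["attacker"] + ["honest-%02d" % i for i in range(99)]

def rounds_until_attacker_is_guard(rng):
    """Count how many guard selections the attacker must force."""
    usable = list(relays)
    rounds = 0
    while True:
        rounds += 1
        guard = rng.choice(usable)
        if guard == "attacker":
            return rounds
        # The attacker DoSes the chosen honest guard; the hidden service
        # now considers it unusable and must pick a replacement.
        usable.remove(guard)

rng = random.Random(2014)
trials = [rounds_until_attacker_is_guard(rng) for _ in range(1000)]
print(sum(trials) / len(trials))  # on average about 50 forced selections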

Furthermore, denial of service attacks on relays or clients in the Tor network can often be leveraged into full deanonymization attacks. These techniques go back many years, to research such as "From a Trickle to a Flood", "Denial of Service or Denial of Security?", "Why I'm not an Entropist", and even the more recent Bitcoin attacks above. The hidden service protocol offers further vectors for DoS attacks, such as the set of HSDirs and the introduction points of a hidden service.

Finally, remote code execution exploits against Tor software are also always a possibility, but we have zero evidence that such exploits exist. Although the Tor source code gets continuously reviewed by our security-minded developers and community members, we would like more focused auditing by experienced bug hunters. Public-interest initiatives like Project Zero could help out a lot here. Funding to launch a bug bounty program of our own could also bring real benefit to our codebase. If you can help, please get in touch.

Advice to concerned hidden service operators

As you can see, we still don't know what happened, and it's hard to give concrete suggestions blindly.

If you are a concerned hidden service operator, we suggest you read the cited resources to get a better understanding of the security that hidden services can offer and of the limitations of the current system. When it comes to anonymity, it's clear that the tighter your threat model is, the more informed you need to be about the technologies you use.

If your hidden service lacks sufficient processor, memory, or network resources, the DoS-based deanonymization attacks may be easy to leverage against your service. Be sure to review the Tor performance tuning guide to optimize your relay or client.

*Another suggestion is to manually select the guard node of your hidden service. By setting the EntryNodes option in Tor's configuration file, you can choose a relay in the Tor network that you trust. Keep in mind, however, that a determined attacker will still be able to determine that this relay is your guard, and all other attacks still apply.
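
A minimal torrc excerpt for that setup might look like the following; the fingerprint shown is a placeholder, not a real relay:

```
# Pin the entry guard to a relay you trust. The fingerprint below is a
# placeholder -- substitute the 40-hex-character fingerprint of your relay.
UseEntryGuards 1
EntryNodes $0123456789ABCDEF0123456789ABCDEF01234567
# Optional: refuse to build circuits at all if the pinned guard is down,
# instead of silently falling back to other relays.
StrictNodes 1
```

Note that StrictNodes trades availability for predictability: if the pinned relay goes down, your service goes unreachable rather than picking a new guard.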

Final words

The task of hiding the location of low-latency web services is a very hard problem and we still don't know how to do it correctly. It seems that there are various issues that none of the current anonymous publishing designs have really solved.

In a way, it's even surprising that hidden services have survived so far. The attention they have received is minimal compared to their social value and compared to the size and determination of their adversaries.

It would be great if there were more people reviewing our designs and code. For example, we would really appreciate feedback on the upcoming hidden service revamp or help with the research on guard discovery attacks (see links above).

Also, it's important to note that Tor currently doesn't have funding for improving the security of hidden services. If you are interested in funding hidden services research and development, please get in touch with us. We hope to find time to organize a crowdfunding campaign to acquire independent and focused hidden service funding.

Finally, if you are a relay operator and your server was recently compromised or you lost control of it, please let us know by sending an email to bad-relays@lists.torproject.org.

Thanks to Griffin, Matt, Adam, Roger, David, George, Karen, and Jake for contributions to this post.

Updates:
* Added information about guard node DoS and EntryNodes option - 2014/11/09 18:16 UTC

Anonymous

November 09, 2014


Layperson here, but responding to the last sentence in the second paragraph "If your relay was seized...":

Is there any way to write a canary that would reside in a relay's software and squawk if the relay was seized? Somewhat similar to heartbeat functions that indicate all's well until it isn't? This could help identify and exclude seized servers from the network.

The canary could be disabled by the person who seized the relay. Really all they need to do is grab the nickname and identity key and then set it up somewhere else.

That said, for one example of a step we could take, see:
https://bugs.torproject.org/13705

Ultimately, we need more people watching the Tor network for anomalies, e.g.
https://gitweb.torproject.org/doctor.git

Just going to throw my 2 cents down here...

Maybe a sysadmin should be required to enter a passphrase upon starting up a Tor node to prove the node is in the correct hands.
I've not heard of servers being seized without any kind of rebooting happening in order to gain access to the contents of said server.
At least then we won't have any exit servers seized without some kind of alert being set off.

Or maybe we should just look at the possibility of multiple servers sharing the same onion URL, working in an anycast/multicast-type scenario, so that if DoS attacks happen, the traffic is split equally among datacenters, making things harder to locate.

Anonymous

November 09, 2014


How would a SQL injection or an RFI bug lead to deanonymization/location of a hidden service?

It is certainly possible to run a hidden service on hardware that is unable to connect to the internet without going through Tor (e.g., Tor is running on another machine), but probably very few people go to the trouble of doing this.

So, usually, code execution on a hidden service web app means you can locate the server.

Add to this that most hidden service operators seem to be running their operations in a chroot jail or on a VPS.

While layered hardware routers would be the ideal scenario, it seems that most of these sites do not even bother with a decent stopgap measure, such as a dedicated machine running something like ESXi to create a virtualized, layered approach.

Anonymous

November 09, 2014


The guy in Eastern Europe who was raided at the behest of the FBI or HSI references the 1st Amendment - how does that apply to him? Does his country have a 1st Amendment?

Yeah, that is kind of a weird phrase for him to throw in.

Alas, even though some of the attackers here are FBI, if he's not an American then they don't have to give him any rights. It's crummy but we've seen it again and again recently. :(

I'm in Eastern Europe and I know that my country extradites people who use the internet to break American laws and these people are tried in American courts but I don't know whether the American 1st Amendment applies.

Any person tried in the US under US law has, at least in theory, all the protections of that law, including 1st Amendment constitutional protection. In practice, however, having such rights violated by the courts happens -- and one cannot always afford to seek the judicial reviews needed.

I imagine this is particularly challenging for people who have been extradited here, as they typically also have no access to funds and are not native English speakers, both of which can act as barriers to effective justice in the US.

Anonymous

November 09, 2014


It's also possible that TOR developers are working with LE, and have injected some hard-to-detect loophole in a convoluted bit of source code. It might be disguised so well that experienced bug hunters would skim right over it and not realize what they were looking at.

Why would TOR devs work with LE? I think in an attempt to help get rid of drug sites/illegal porn sites, they may feel it gives "legitimacy" to TOR, and in turn, may increase donations/funding.

The big problem with that line of thinking, however, is those same "loopholes" used by LE to shut down "illegal" sites will also be used to catch the next Edward Snowden, or crack down on people criticising their government. At this point, I think the evidence points to TOR being compromised from the inside, and TOR's credibility is suspect until/unless we learn otherwise.

Intentionally degrading Tor's anonymity would be stupid. It's already weak enough compared to the very real adversaries that some of our users are facing (even though at the same time it is the best available system).

Putting a backdoor in Tor, which could then be exploited by other people too, would be a really bad idea. We haven't done it and we won't do it.

For more reading and videos, see:
https://www.torproject.org/docs/faq#Backdoor
https://svn.torproject.org/svn/projects/articles/circumvention-features…
https://blog.torproject.org/blog/trip-report-october-fbi-conference
https://media.torproject.org/video/30C3_-_5423_-_en_-_saal_1_-_20131227…

Wow, what a bunch of big accusations against those who stand between a complete surveillance state and a somewhat half-assed, broken private internet.

JAP did that, and it destroyed them. Tor would have to be run by idiots to think that was a good idea, and I don't believe it is.

I don't think so! If they would, why are some of the worst sites in onionland still online? For me it seems that LE was catching what they could and not what they wish they could!

I very much doubt TOR developers AS A GROUP would do such a thing.

However, a single TOR developer might go "off the reservation" and put his own desire to help rid the world of illegal-drug/illegal-porn sites ahead of those of his fellow developers, the TOR community, and those who depend on TOR for purposes even he would presumably think are legitimate. As others have already implied, such a person is only fooling himself if he thinks doing this won't hurt all TOR users.

There is also the remote possibility of a double-agent programmer who only pretends to be working towards the project's goals but is really trying to insert code to serve the needs of his true master, probably a government agency but possibly a non-governmental actor that would benefit from being able to break TOR's security.

A good albeit incomplete defense to such "lone wolf Benedict Arnold coders" is to have all code peer-reviewed before it is committed and to have periodic code audits so every line of code is reviewed every year or two by someone who has neither touched nor reviewed that section in the last few years.

TOR is an acronym, so it is TOR. If someone wants to call it Tor, that is fine, but the same is equally true.

Also, I laugh at everyone here. You cannot hide from low level TCP/IP attack when the transmission medium is compromised. If a packet can go from point A to B it can be tracked.

Anonymous

December 15, 2014

In reply to by arma


Oh, that is just great! I am an English major, so I am used to capitalizing all acronyms. Now I have to get used to writing "Tor", even though I know it is an acronym!

Anonymous

November 09, 2014


The scope of the recent seizures makes it clear that TOR has been compromised, and LE has found a way to strip anonymity. It obviously wasn't just one or two people who made mistakes on their security--this was a world-wide coordinated effort with dozens of sites. I also find it hard to believe that the TOR devs were not aware of the exploit LE has been able to use. I suspect they thought TOR would gain "legitimacy" if they allowed LE to crack down on drug sites et al. The problem is LE got greedy, and did this mass infiltration. If it had only been one or two sites, we could have chalked it up to a couple people making security mistakes. Until we find out exactly how they did it, we should assume TOR has been compromised, and assume "they" know who you are when using it.

He's an idiot who doesn't realize the entire internet is the conception of the United States military. If he's so sure Tor is a honeypot, why even come here to post? It's the same story every time: some CS1-level student thinks he's a cypherpunk because he knows about the Silk Road and has all the answers to the questions plaguing humanity. He should focus on his schoolwork.

I don't think the number of busts tells us anything about whether or not Tor has been compromised. If LE had *not* broken Tor and were just doing a regular bust I think you would expect to see them do it like it was done here. Wait until you have gathered enough evidence through whatever channels you can and take down as many targets as possible in a very short space of time. If they had only taken down one or two, others would rush to secure themselves or might go offline entirely. LE wants to look big and strong, more busts is more intimidating.

Anonymous

November 09, 2014


Of course putting a "backdoor" into the code would be a bad idea, but then again, sometimes the best hiding places are right in front of you. I think the TOR source code needs to be picked through with a fine toothed comb. And who knows what the TOR devs might kowtow to if the FBI showed up at their door, or if they were handed an envelope full of money. Trust nobody--especially the people who say, "you can trust us".

Yes please! Please audit the code. There's a great guy in Russia (we think -- so far he's remained anonymous) who has been finding and reporting bugs in the code over the past years. We need more people doing more of that auditing.

And this is especially the case when you consider the broader ecosystem of software that's involved here -- whether it's apache and nginx, or Firefox, or Tor, or the Linux kernel, etc.

But it's actually worse than this, because even if you do audit the code, you'll only be checking for whether we do things the way we said we did. Popping up a layer on the stack, the other question needs to be "should you be doing them that way, or is there a safer way?" That's what all the research and design work is about, and why we work so closely with the academic anonymous communications research community.

See https://svn.torproject.org/svn/projects/articles/circumvention-features… for some discussion about all the layers that need evaluation and analysis, beyond just source code.

And as a last note, it would be neat to set up some sort of security bug bounty program, like many of the major commercial software companies have these days. If anybody knows a funder or company who wants to help make that happen, please talk to them.

Anonymous

November 09, 2014


I'm going tor over tor mode with direct guard being a bridge, until this whole thing blows over.

Using a bridge may be a good idea, but keep in mind that bridges are given much less attention than the public relays. At this point, they are only best-effort. (This is slowly improving, though.)

As for Tor-over-Tor, this may not be such a good idea. It increases the length of your circuit but doesn't necessarily improve the anonymity properties. Remember that when the second instance of Tor creates its circuit, it has no idea which relays were chosen by the first instance. If both instances of Tor choose the same node(s) for the circuit, it becomes easier to execute some attacks.

Tor over tor is a poor man's implementation of guard pinning. A cheap way of protecting against RP DESTROY. I think it might have worked against RELAY_EARLY too, but that is old history now.

Downsides, as you said: a relay being used twice, and an increased number of relays involved, thus increasing the chance of a bad one being involved. I'm still trying to iron it out.
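
As a back-of-the-envelope sketch of that "same relay used twice" risk, under the simplifying (and, for real Tor, wrong) assumption that each relay in a circuit is chosen uniformly at random:

```python
# Chance that a second, independently built 3-hop circuit reuses at least
# one of the first circuit's relays, assuming each of its relays is picked
# uniformly at random from the public relay list. Real Tor weights relay
# selection by bandwidth, so overlap on large relays is considerably more
# likely than this uniform estimate suggests.
n_relays = 6000   # rough order of magnitude of the public relay count
hops = 3

p_no_overlap = 1.0
for _ in range(hops):
    p_no_overlap *= (n_relays - hops) / n_relays

p_overlap = 1 - p_no_overlap
print(round(p_overlap, 4))  # about 0.0015
```

The uniform estimate is small, but bandwidth weighting concentrates selection on a few hundred fast relays, which is where the real collision risk lives.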

inb4 #2667