arma's blog

The New Research from Northeastern University

We’ve been speaking to journalists who are curious about a HotPETS 2016 talk from last week, on the research paper HOnions: Towards Detection and Identification of Misbehaving Tor HSDirs by our colleagues at Northeastern University. Here's a short explanation, written by Donncha and Roger.

Internally, Tor has a system for identifying bad relays. When we find a bad relay, we throw it out of the network.

But our techniques for finding bad relays aren't perfect, so it's good that there are other researchers also working on this problem. Acting independently, we had already detected and removed many of the suspicious relays that these researchers have found.

The researchers have sent us a list of the other relays that they found, and we're currently working on confirming that they are bad. (This is tougher than it sounds, since the technique used by the other research group only detects that relays *might* be bad, so we don't know which ones to blame for sure.)

It's especially great to have this other research group working on this topic, since their technique for detecting bad relays is different from our technique, and that means better coverage.

As far as we can tell, the misbehaving relays' goal in this case is just to discover onion addresses that they wouldn't be able to learn other ways—they aren't able to identify the IP addresses of hosts or visitors to Tor hidden services.

The authors here are not trying to discover new onion addresses. They are trying to detect other people who are learning about onion addresses by running bad HSDirs/relays.

This activity only allows attackers to discover new onion addresses. It does not impact the anonymity of hidden services or hidden service clients.
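The detection idea can be sketched roughly as follows. This is our illustrative reconstruction, not the researchers' code, and all names in it are made up: each "honey onion" is a fresh onion address that is published to a known set of HSDirs and never advertised anywhere else, so an unsolicited visit to it implicates at least one HSDir in that set.

```python
# Illustrative sketch of honey-onion style detection (not the paper's code).
# honions maps each honey onion address to the set of HSDirs that stored its
# descriptor. A visit to a honion means some HSDir in its set snooped the
# address; finding the smallest set of HSDirs that explains all visits is a
# set-cover problem (brute force is fine at this scale).

from itertools import combinations

def suspicious_hsdirs(honions, visited):
    """Return a minimal set of HSDirs that explains every observed visit."""
    implicated_sets = [honions[h] for h in visited]
    if not implicated_sets:
        return set()
    all_dirs = set().union(*implicated_sets)
    for size in range(1, len(all_dirs) + 1):
        for candidate in combinations(sorted(all_dirs), size):
            cand = set(candidate)
            # A valid explanation touches every implicated set.
            if all(cand & s for s in implicated_sets):
                return cand
    return set()

honions = {
    "a.onion": {"dir1", "dir2"},
    "b.onion": {"dir2", "dir3"},
    "c.onion": {"dir4", "dir5"},
}
print(suspicious_hsdirs(honions, {"a.onion", "b.onion"}))  # {'dir2'}
```

Note the caveat mentioned above: when several HSDirs could explain the same visits, the technique only narrows the suspects down, which is why confirming individual bad relays takes extra work.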

We have known about and been defending against this situation for quite some time. The issue will be resolved more thoroughly with the next-generation hidden services design. Check out our blog post, Mission: Montreal!

Statement from the Tor Project re: the Court's February 23 Order in U.S. v. Farrell

Journalists have been asking us for our thoughts about a recent court order, released as a PDF, in which a judge decided that a defendant shouldn't get any more details about how the prosecutors built their case against him. Here is the statement we wrote for them:

"We read with dismay the Western Washington District Court's Order on Defendant's Motion to Compel issued on February 23, 2016, in U.S. v. Farrell. The Court held "Tor users clearly lack a reasonable expectation of privacy in their IP addresses while using the Tor network." It is clear that the court does not understand how the Tor network works. The entire purpose of the network is to enable users to communicate privately and securely. While it is true that users "disclose information, including their IP addresses, to unknown individuals running Tor nodes," that information gets stripped from messages as they pass through Tor's private network pathways.

This separation of identity from routing is key to why the court needs to consider how exactly the attackers got this person's IP address. The problem is not simply that the attackers learned the user's IP address. The problem is that they appear to have also intercepted and tampered with the user's traffic elsewhere in the network, at a point where the traffic does not identify the user. They needed to attack both places in order to link the user to his destination. This separation is how Tor provides anonymity, and it is why the previous cases about IP addresses do not apply here.

The Tor network is secure and has only rarely been compromised. The Software Engineering Institute ("SEI") of Carnegie Mellon University (CMU) compromised the network in early 2014 by operating relays and tampering with user traffic. That vulnerability, like all other vulnerabilities, was patched as soon as we learned about it. The Tor network remains the best way for users to protect their privacy and security when communicating online."

Transparency, Openness, and our 2014 Financials

After completing the standard audit, our 2014 state and federal tax filings are available. We publish all of our related tax documents because we believe in transparency.

Tor's annual revenue in 2014 held steady at about $2.5 million. Tor's budget is modest considering the number of people involved and the impact we have. And it is dwarfed by the budgets that our adversaries are spending to make the world a more dangerous and less free place.

To achieve our goals, which include scaling our user base, we fund about 20 contractors and staff members (some part time, some full time) and rely on thousands of volunteers to do everything from systems administration to outreach. Our relay operators are also volunteers, and in 2014 we grew their number to almost 7,000 — helped along by the Electronic Frontier Foundation's wonderful Tor Challenge, which netted 1,635 relays. Our user base is up to several million people each day.

Transparency doesn't just mean that we show you our source code (though of course we do). The second layer to transparency is publishing specifications to explain what we thought we implemented in the source code. And the layer above that is publishing design documents and research papers to explain why we chose to build it that way, including analyzing the security implications and the tradeoffs of alternate designs. The reason for all these layers is to help people evaluate every level of our system: whether we chose the right design, whether we turned that design into a concrete plan that will keep people safe, and whether we correctly implemented this plan. Tor gets a huge amount of analysis and attention, from professors and university research groups down to individual programmers around the world, and this consistent peer review has been one of our core strengths over the past decade.

As we look toward the future, we are grateful for our institutional funding, but we want to expand and diversify our funding too. The recent donations campaign is a great example of our vision for future fundraising. We are excited about the future, and we invite you to join us: donate, volunteer, and run a Tor relay.

Announcing Shari Steele as our new executive director

At long last, I am thrilled to announce that our executive director search has concluded successfully! And what a success it is: our good friend Shari Steele, who led EFF for 15 years, is coming on board to lead us.

We've known Shari for a long time. She led EFF's choice to fund Tor back in 2004-2005. She is also the one who helped create EFF's technology department, which has brought us HTTPS Everywhere and its various guides and tool assessments.

Tor's technical side is world-class, and I am excited that Shari will help Tor's organizational side become great too. She shares our core values, she brings leadership in managing and coordinating people, she has huge experience in growing a key non-profit in our space, and her work pioneering EFF's community-based funding model will be especially valuable as we continue our campaign to diversify our funding sources.

Tor is part of a larger family of civil liberties organizations, and this move makes it clear that Tor is a main figure in that family. Nick and I will focus short-term on shepherding a smooth transition out of our "interim" roles, and after that we are excited to get back to our old roles actually doing technical work. I'll let Shari pick up the conversation from here, in her upcoming blog post.

Please everybody join me in welcoming Shari!

Our first real donations campaign



Celebrate Giving Tuesday with Tor

I am happy to tell you that Tor is running its first ever end-of-year fundraising drive. Our goal is to become more sustainable financially and less reliant on government funding. We need your help.

We've done some amazing things in recent years. The Tor network is much faster and more consistent than before. We're leading the world in pushing for adoption of reproducible builds, a system where other developers can build their own Tor Browser based on our code to be sure that it is what we say it is. Tor Browser's secure updates are working smoothly.

We've provided safe Internet access to citizens whose countries enacted harsh censorship, like Turkey and Bangladesh. Our press and community outreach have supported victories like the New Hampshire library's exit relay. New releases of tools like Tor Messenger have been a hit.

When the Snowden documents and Hacking Team emails were first released, we provided technical and policy analysis that has helped the world better understand the threats to systems like Tor — and further, to people's right to privacy. Our analysis helped mobilize Internet security and civil liberties communities to take action against these threats.

We have much more work ahead of us in the coming years. First and foremost, we care about our users and the usability of our tools. We want to accelerate user growth: The Tor network sees millions of users each day, but there are tens of millions more who are waiting for it to be just a little bit faster, more accessible, or easier to install. We want to get the word out that Tor is for everyone on the planet.

We also need to focus on outreach and education, and on helping our allies who focus on public policy to succeed. Tor is still the best system in the world against large adversaries like governments, but these days the attackers are vastly outspending the defenders across the board. So in addition to keeping Tor both strong and usable, we need to provide technical advice and support to groups like EFF and ACLU while they work to rein in the parts of our governments that have overstepped the authority and limits our laws meant to give them.

From an organization and community angle, we need to improve our stability by continued work on transparency and communication, strengthening our leadership, choosing our priorities well, and becoming more agile and adapting to the most important issues as they arise.

Taller mountains await after these: We need to tackle the big open anonymity problems like correlation attacks, we need to help websites learn how to engage with users who care about privacy, and we need to demonstrate to governments around the world that we don't have to choose between security and privacy.

We appreciate the help we receive from past and current funders. But ultimately, Tor as an organization will be most effective when we have the flexibility to turn to whichever issues are most pressing at the time — and that requires unrestricted funding. It's not going to happen overnight — after all, it took EFF years to get their donation campaigns going smoothly — but they've gotten there, and you can help us take these critical first steps so we can get there, too. By participating in this first campaign, you will show other people that this whole plan can work.

Tor has millions of users around the globe, and many people making modest donations can create a sustainable Tor. In fact, please make a larger donation if you can! These larger contributions form a strong foundation for our campaign and inspire others to give to Tor.

You can help our campaign thrive in three simple ways:

  • Make a donation at whatever level is possible and meaningful for you. Every contribution makes Tor stronger. Monthly donations are especially helpful because they let us make plans for the future.
  • Tell the world that you support Tor! Shout about it, tweet about it, share our posts with your community. Let everyone know that you #SupportTor. These steps encourage others to join in and help to spread the word.
  • Think about how and why Tor is meaningful in your life and consider writing or tweeting about it. Be sure to let us know so we can amplify your voice.

Beyond collecting money (which is great), I'm excited that the fundraising campaign will also double as an awareness campaign about Tor: We do amazing things, and amazing people love us, but in the past we've been too busy doing things to get around to telling everyone about them.

We have some great champions lined up over the coming days and weeks to raise awareness and to showcase the diversity of people who value Tor. Please help the strongest privacy tool in the world become more sustainable!

Did the FBI Pay a University to Attack Tor Users?

The Tor Project has learned more about last year's attack by Carnegie Mellon researchers on the hidden service subsystem. Apparently these researchers were paid by the FBI to attack hidden services users in a broad sweep, and then sift through their data to find people whom they could accuse of crimes. We publicized the attack last year, along with the steps we took to slow down or stop such an attack in the future:
https://blog.torproject.org/blog/tor-security-advisory-relay-early-traffic-confirmation-attack/

Here is the link to their (since withdrawn) submission to the Black Hat conference:
https://web.archive.org/web/20140705114447/http://blackhat.com/us-14/briefings.html#you-dont-have-to-be-the-nsa-to-break-tor-deanonymizing-users-on-a-budget
along with Ed Felten's analysis at the time:
https://freedom-to-tinker.com/blog/felten/why-were-cert-researchers-attacking-tor/

We have been told that the payment to CMU was at least $1 million.

There is no indication yet that they had a warrant or any institutional oversight by Carnegie Mellon's Institutional Review Board. We think it's unlikely they could have gotten a valid warrant for CMU's attack as conducted, since it was not narrowly tailored to target criminals or criminal activity, but instead appears to have indiscriminately targeted many users at once.

Such action is a violation of our trust and basic guidelines for ethical research. We strongly support independent research on our software and network, but this attack crosses the crucial line between research and endangering innocent users.

This attack also sets a troubling precedent: Civil liberties are under attack if law enforcement believes it can circumvent the rules of evidence by outsourcing police work to universities. If academia uses "research" as a stalking horse for privacy invasion, the entire enterprise of security research will fall into disrepute. Legitimate privacy researchers study many online systems, including social networks. If this kind of FBI attack by university proxy is accepted, no one will have meaningful 4th Amendment protections online, and everyone is at risk.

When we learned of this vulnerability last year, we patched it and published the information we had on our blog:
https://blog.torproject.org/blog/tor-security-advisory-relay-early-traffic-confirmation-attack/

We teach law enforcement agents that they can use Tor to do their investigations ethically, and we support such use of Tor — but the mere veneer of a law enforcement investigation cannot justify wholesale invasion of people's privacy, and certainly cannot give it the color of "legitimate research".

Whatever academic security research should be in the 21st century, it certainly does not include "experiments" for pay that indiscriminately endanger strangers without their knowledge or consent.

A technical summary of the Usenix fingerprinting paper

Albert Kwon, Mashael AlSabah, and others have a paper entitled Circuit Fingerprinting Attacks: Passive Deanonymization of Tor Hidden Services at the upcoming Usenix Security symposium in a few weeks. Articles describing the paper are making the rounds currently, so I'm posting a technical summary here, along with explanations of the next research questions that would be good to answer. (I originally wrote this summary for Dan Goodin for his article at Ars Technica.) Also for context, remember that this is another research paper in the great set of literature around anonymous communication systems—you can read many more at http://freehaven.net/anonbib/.

"This is a well-written paper. I enjoyed reading it, and I'm glad the researchers are continuing to work in this space.

First, for background, run (don't walk) to Mike Perry's blog post explaining why website fingerprinting papers have historically overestimated the risks for users:
https://blog.torproject.org/blog/critique-website-traffic-fingerprinting...
and then check out Marc Juarez et al's followup paper from last year's ACM CCS that backs up many of Mike's concerns:
http://freehaven.net/anonbib/#ccs2014-critical

To recap, this new paper describes three phases. In the first phase, they hope to get lucky and end up operating the entry guard for the Tor user they're trying to target. In the second phase, the target user loads some web page using Tor, and they use a classifier to guess whether the web page was in onion-space or not. Lastly, if the first classifier said "yes it was", they use a separate classifier to guess which onion site it was.

The first big question comes in phase three: is their website fingerprinting classifier actually accurate in practice? They consider a world of 1000 front pages, but ahmia.fi and other onion-space crawlers have found millions of pages by looking beyond front pages. Against that many pages, a 2.9% false positive rate produces an enormous number of false positives in absolute terms, so the vast majority of the classification guesses will be mistakes.

For example, if the user loads ten pages, and the classifier outputs a guess for each web page she loads, will it output a stream of "She went to Facebook!" "She went to Riseup!" "She went to Wildleaks!" while actually she was just reading posts in a Bitcoin forum the whole time? Maybe they can design a classifier that works well when faced with many more web pages, but the paper doesn't show one, and Marc Juarez's paper argues convincingly that it's hard to do.
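The base-rate arithmetic behind this concern can be made concrete. The numbers below are illustrative (only the 2.9% false positive rate comes from the paper; the true positive rate and page counts are assumptions for the sake of the example):

```python
# Base-rate arithmetic for the phase-three classifier (illustrative numbers).
# Even a low false positive rate yields mostly-wrong guesses when the
# monitored sites are a tiny fraction of all pages a user might load.

def precision(tpr, fpr, base_rate):
    """Fraction of 'positive' guesses that are actually correct."""
    true_pos = tpr * base_rate
    false_pos = fpr * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Suppose the classifier is nearly perfect on monitored pages (TPR 95%),
# has the paper's 2.9% FPR, and monitors 1000 front pages out of
# 1,000,000 reachable onion pages (base rate 0.1%).
p = precision(tpr=0.95, fpr=0.029, base_rate=1000 / 1_000_000)
print(f"{p:.1%} of positive guesses are correct")  # prints about 3.2%
```

In other words, under these assumptions roughly 97% of the classifier's "she went to site X" guesses would be wrong, which is the scenario sketched in the paragraph above.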

The second big question is whether adding a few padding cells would fool their "is this a connection to an onion service" classifier. We haven't tried to hide that in the current Tor protocol, and the paper presents what looks like a great classifier. It's not surprising that their classifier basically stops working in the face of more padding though: classifiers are notoriously brittle when you change the situation on them. So the next research step is to find out if it's easy or hard to design a classifier that isn't fooled by padding.

I look forward to continued attention by the research community to work toward answers to these two questions. I think it would be especially fruitful to look also at true positive rates and false positives of both classifiers together, which might show more clearly (or not) that a small change in the first classifier has a big impact on foiling the second classifier. That is, if we can make it even a little bit more likely that the "is it an onion site" classifier guesses wrong, we could make the job of the website fingerprinting classifier much harder because it has to consider the billions of pages on the rest of the web too."

Preliminary analysis of Hacking Team's slides

A few weeks ago, Hacking Team was bragging publicly about a Tor Browser exploit. We've learned some details of their proposed attack from a leaked PowerPoint presentation that was part of the Hacking Team dump.

The good news is that they don't appear to have any exploit on Tor or on Tor Browser. The other good news is that their proposed attack doesn't scale well. They need to put malicious hardware on the local network of their target user, which requires choosing their target, locating her, and then arranging for the hardware to arrive in the right place. So it's not really practical to launch the attack on many Tor users at once.

But they actually don't need an exploit on Tor or Tor Browser. Here's the proposed attack in a nutshell:

1) Pick a target user (say, you), figure out how you connect to the Internet, and install their attacking hardware on your local network (e.g. inside your ISP).

2) Wait for you to browse the web without Tor Browser, i.e. with some other browser like Firefox or Chrome or Safari, and then insert some sort of exploit into one of the web pages you receive (maybe the Flash 0-day we learned about from the same documents, or maybe some other exploit).

3) Once they've taken control of your computer, they configure your Tor Browser to use a socks proxy on a remote computer that they control. In effect, rather than using the Tor client that's part of Tor Browser, you'll be using their remote Tor client, so they get to intercept and watch your traffic before it enters the Tor network.

You have to stop them at step two, because once they've broken into your computer, they have many options for attacking you from there.
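As a defense-in-depth illustration of step three, here is a hypothetical sanity check (our own sketch, not a Tor Project tool): Tor Browser normally talks to a tor client on localhost, so a SOCKS proxy setting that points at a remote machine is a red flag. The pref names follow Firefox's standard prefs.js conventions; the file path and IP addresses are made up for the example.

```python
# Hypothetical check for the step-three tampering described above: scan a
# Firefox/Tor Browser prefs.js for a SOCKS proxy that is not on localhost.

import re

LOCAL_HOSTS = {"127.0.0.1", "localhost", "::1"}

def socks_host(prefs_text):
    """Extract the network.proxy.socks host from prefs.js text, if set."""
    m = re.search(r'user_pref\("network\.proxy\.socks",\s*"([^"]*)"\)',
                  prefs_text)
    return m.group(1) if m else None

def looks_tampered(prefs_text):
    """True when a SOCKS proxy is configured and it is not local."""
    host = socks_host(prefs_text)
    return host is not None and host not in LOCAL_HOSTS

ok = 'user_pref("network.proxy.socks", "127.0.0.1");'
bad = 'user_pref("network.proxy.socks", "203.0.113.7");'
print(looks_tampered(ok), looks_tampered(bad))  # prints: False True
```

Of course, as the paragraph above notes, once the attacker controls your computer they could also hide such changes, which is why the real defense is stopping them at step two.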

Their proposed attack requires Hacking Team (or your government) to already have you in their sights. This is not mass surveillance — this is very targeted surveillance.

Another answer is to run a system like Tails, which avoids interacting with any local resources. In this case there should be no opportunity to insert an exploit from the local network. But that's still not a complete solution: some coffeeshops, hotels, etc will demand that you interact with their local login page before you can access the Internet. Tails includes what they call their 'unsafe' browser for these situations, and you're at risk during that brief period when you use it.

Ultimately, security here comes down to having safer browsers. We continue to work on ways to make Tor Browser more resilient against attacks, but the key point here is that they'll go after the weakest link on your system — and at least in the scenarios they describe, Tor Browser isn't the weakest link.

As a final point, note that this is just a powerpoint deck (probably a funding pitch), and we've found no indication yet that they ever followed through on their idea.

We'll update you with more information if we learn anything further. Stay safe out there!
