arma's blog

Transparency, Openness, and our 2014 Financials

With the standard audit complete, our 2014 state and federal tax filings are now available. We publish all of our related tax documents because we believe in transparency.

Tor's annual revenue in 2014 held steady at about $2.5 million. Tor's budget is modest considering the number of people involved and the impact we have. And it is dwarfed by the budgets that our adversaries are spending to make the world a more dangerous and less free place.

To achieve our goals, which include scaling our user base, we fund about 20 contractors and staff members (some part time, some full time) and rely on thousands of volunteers to do everything from systems administration to outreach. Our relay operators are also volunteers, and in 2014 we grew their number to almost 7,000 — helped along by the Electronic Frontier Foundation's wonderful Tor Challenge, which netted 1,635 relays. Our user base is up to several million people each day.

Transparency doesn't just mean that we show you our source code (though of course we do). The second layer to transparency is publishing specifications to explain what we thought we implemented in the source code. And the layer above that is publishing design documents and research papers to explain why we chose to build it that way, including analyzing the security implications and the tradeoffs of alternate designs. The reason for all these layers is to help people evaluate every level of our system: whether we chose the right design, whether we turned that design into a concrete plan that will keep people safe, and whether we correctly implemented this plan. Tor gets a huge amount of analysis and attention from professors and university research groups down to individual programmers around the world, and this consistent peer review has been one of our core strengths over the past decade.

As we look toward the future, we are grateful for our institutional funding, but we want to expand and diversify our funding too. The recent donations campaign is a great example of our vision for future fundraising. We are excited about the future, and we invite you to join us: donate, volunteer, and run a Tor relay.

Announcing Shari Steele as our new executive director

At long last, I am thrilled to announce that our executive director search has concluded successfully! And what a success it is: our good friend Shari Steele, who led EFF for 15 years, is coming on board to lead us.

We've known Shari for a long time. She led EFF's decision to fund Tor back in 2004-2005. She also helped create EFF's technology department, which has brought us HTTPS Everywhere and its various guides and tool assessments.

Tor's technical side is world-class, and I am excited that Shari will help Tor's organizational side become great too. She shares our core values, she brings leadership in managing and coordinating people, she has huge experience in growing a key non-profit in our space, and her work pioneering EFF's community-based funding model will be especially valuable as we continue our campaign to diversify our funding sources.

Tor is part of a larger family of civil liberties organizations, and this move makes it clear that Tor is a central member of that family. Nick and I will focus short-term on shepherding a smooth transition out of our "interim" roles, and after that we are excited to get back to our old roles actually doing technical work. I'll let Shari pick up the conversation from here, in her upcoming blog post.

Please everybody join me in welcoming Shari!

Our first real donations campaign



Celebrate Giving Tuesday with Tor

I am happy to tell you that Tor is running its first ever end-of-year fundraising drive. Our goal is to become more sustainable financially and less reliant on government funding. We need your help.

We've done some amazing things in recent years. The Tor network is much faster and more consistent than before. We're leading the world in pushing for adoption of reproducible builds, a system where other developers can build their own Tor Browser based on our code to be sure that it is what we say it is. Tor Browser's secure updates are working smoothly.

We've provided safe Internet access to citizens whose countries enacted harsh censorship, like Turkey and Bangladesh. Our press and community outreach have supported victories like the New Hampshire library's exit relay. New releases of tools like Tor Messenger have been a hit.

When the Snowden documents and Hacking Team emails were first released, we provided technical and policy analysis that has helped the world better understand the threats to systems like Tor — and further, to people's right to privacy. Our analysis helped mobilize Internet security and civil liberties communities to take action against these threats.

We have much more work ahead of us in the coming years. First and foremost, we care about our users and the usability of our tools. We want to accelerate user growth: The Tor network sees millions of users each day, but there are tens of millions more who are waiting for it to be just a little bit faster, more accessible, or easier to install. We want to get the word out that Tor is for everyone on the planet.

We also need to focus on outreach and education, and on helping our allies who focus on public policy to succeed. Tor is still the best system in the world against large adversaries like governments, but these days the attackers are vastly outspending the defenders across the board. So in addition to keeping Tor both strong and usable, we need to provide technical advice and support to groups like EFF and ACLU while they work to rein in the parts of our governments that have gone beyond the permissions and limits that our laws meant to give them.

From an organizational and community angle, we need to improve our stability by continuing our work on transparency and communication, strengthening our leadership, choosing our priorities well, and becoming more agile so we can adapt to the most important issues as they arise.

Taller mountains await after these: We need to tackle the big open anonymity problems like correlation attacks, we need to help websites learn how to engage with users who care about privacy, and we need to demonstrate to governments around the world that we don't have to choose between security and privacy.

We appreciate the help we receive from past and current funders. But ultimately, Tor as an organization will be most effective when we have the flexibility to turn to whichever issues are most pressing at the time — and that requires unrestricted funding. It's not going to happen overnight — after all, it took EFF years to get their donation campaigns going smoothly — but they've gotten there, and you can help us take these critical first steps so we can get there, too. By participating in this first campaign, you will show other people that this whole plan can work.

Tor has millions of users around the globe, and many people making modest donations can create a sustainable Tor. In fact, please make a larger donation if you can! These larger contributions form a strong foundation for our campaign and inspire others to give to Tor.

You can help our campaign thrive in three simple ways:

  • Make a donation at whatever level is possible and meaningful for you. Every contribution makes Tor stronger. Monthly donations are especially helpful because they let us make plans for the future.
  • Tell the world that you support Tor! Shout about it, tweet about it, share our posts with your community. Let everyone know that you #SupportTor. These steps encourage others to join in and help to spread the word.
  • Think about how and why Tor is meaningful in your life and consider writing or tweeting about it. Be sure to let us know so we can amplify your voice.

Beyond collecting money (which is great), I'm excited that the fundraising campaign will also double as an awareness campaign about Tor: We do amazing things, and amazing people love us, but in the past we've been too busy doing things to get around to telling everyone about them.

We have some great champions lined up over the coming days and weeks to raise awareness and to showcase the diversity of people who value Tor. Please help the strongest privacy tool in the world become more sustainable!

Did the FBI Pay a University to Attack Tor Users?

The Tor Project has learned more about last year's attack by Carnegie Mellon researchers on the hidden service subsystem. Apparently these researchers were paid by the FBI to attack hidden service users in a broad sweep, and then sift through their data to find people whom they could accuse of crimes. We publicized the attack last year, along with the steps we took to slow down or stop such an attack in the future:
https://blog.torproject.org/blog/tor-security-advisory-relay-early-traffic-confirmation-attack/

Here is the link to their (since withdrawn) submission to the Black Hat conference:
https://web.archive.org/web/20140705114447/http://blackhat.com/us-14/briefings.html#you-dont-have-to-be-the-nsa-to-break-tor-deanonymizing-users-on-a-budget
along with Ed Felten's analysis at the time:
https://freedom-to-tinker.com/blog/felten/why-were-cert-researchers-attacking-tor/

We have been told that the payment to CMU was at least $1 million.

There is no indication yet that they had a warrant or any oversight from Carnegie Mellon's Institutional Review Board. We think it's unlikely they could have gotten a valid warrant for CMU's attack as conducted, since it was not narrowly tailored to target criminals or criminal activity, but instead appears to have indiscriminately targeted many users at once.

Such action is a violation of our trust and basic guidelines for ethical research. We strongly support independent research on our software and network, but this attack crosses the crucial line between research and endangering innocent users.

This attack also sets a troubling precedent: Civil liberties are under attack if law enforcement believes it can circumvent the rules of evidence by outsourcing police work to universities. If academia uses "research" as a stalking horse for privacy invasion, the entire enterprise of security research will fall into disrepute. Legitimate privacy researchers study many online systems, including social networks. If this kind of FBI attack by university proxy is accepted, no one will have meaningful Fourth Amendment protections online, and everyone is at risk.

When we learned of this vulnerability last year, we patched it and published the information we had on our blog:
https://blog.torproject.org/blog/tor-security-advisory-relay-early-traffic-confirmation-attack/

We teach law enforcement agents that they can use Tor to do their investigations ethically, and we support such use of Tor — but the mere veneer of a law enforcement investigation cannot justify wholesale invasion of people's privacy, and certainly cannot give it the color of "legitimate research".

Whatever academic security research should be in the 21st century, it certainly does not include "experiments" for pay that indiscriminately endanger strangers without their knowledge or consent.

A technical summary of the Usenix fingerprinting paper

Albert Kwon, Mashael AlSabah, and others have a paper entitled Circuit Fingerprinting Attacks: Passive Deanonymization of Tor Hidden Services at the upcoming Usenix Security symposium in a few weeks. Articles describing the paper are currently making the rounds, so I'm posting a technical summary here, along with explanations of the next research questions that would be good to answer. (I originally wrote this summary for Dan Goodin for his article at Ars Technica.) Also for context, remember that this is another research paper in the great set of literature around anonymous communication systems—you can read many more at http://freehaven.net/anonbib/.

"This is a well-written paper. I enjoyed reading it, and I'm glad the researchers are continuing to work in this space.

First, for background, run (don't walk) to Mike Perry's blog post explaining why website fingerprinting papers have historically overestimated the risks for users:
https://blog.torproject.org/blog/critique-website-traffic-fingerprinting...
and then check out Marc Juarez et al's followup paper from last year's ACM CCS that backs up many of Mike's concerns:
http://freehaven.net/anonbib/#ccs2014-critical

To recap, this new paper describes three phases. In the first phase, they hope to get lucky and end up operating the entry guard for the Tor user they're trying to target. In the second phase, the target user loads some web page using Tor, and they use a classifier to guess whether the web page was in onion-space or not. Lastly, if the first classifier said "yes it was", they use a separate classifier to guess which onion site it was.

The first big question comes in phase three: is their website fingerprinting classifier actually accurate in practice? They consider a world of 1000 front pages, but ahmia.fi and other onion-space crawlers have found millions of pages by looking beyond front pages. Their 2.9% false positive rate becomes enormous in the face of this many pages—and the result is that the vast majority of the classification guesses will be mistakes.

For example, if the user loads ten pages, and the classifier outputs a guess for each web page she loads, will it output a stream of "She went to Facebook!" "She went to Riseup!" "She went to Wildleaks!" while actually she was just reading posts in a Bitcoin forum the whole time? Maybe they can design a classifier that works well when faced with many more web pages, but the paper doesn't show one, and Marc Juarez's paper argues convincingly that it's hard to do.

The second big question is whether adding a few padding cells would fool their "is this a connection to an onion service" classifier. We haven't tried to hide that in the current Tor protocol, and the paper presents what looks like a great classifier. It's not surprising that their classifier basically stops working in the face of more padding though: classifiers are notoriously brittle when you change the situation on them. So the next research step is to find out if it's easy or hard to design a classifier that isn't fooled by padding.

I look forward to continued attention from the research community on these two questions. I think it would be especially fruitful to look at the true positive and false positive rates of both classifiers together, which might show more clearly (or not) that a small change in the first classifier has a big impact on foiling the second. That is, if we can make it even a little bit more likely that the "is it an onion site" classifier guesses wrong, we could make the job of the website fingerprinting classifier much harder, because it then has to consider the billions of pages on the rest of the web too."
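To make the base-rate concern above concrete, here is a quick back-of-the-envelope sketch. The 2.9% false positive rate comes from the discussion above; the 88% true positive rate and the page counts are illustrative assumptions, not numbers from the paper:

```python
# Base-rate sketch: why a 2.9% false positive rate overwhelms a
# classifier once the world contains far more pages than it was
# evaluated on. The true positive rate and page counts below are
# illustrative assumptions, not measurements from the paper.

def precision(tpr, fpr, monitored_pages, total_pages):
    """Fraction of 'monitored page!' alarms that are actually correct."""
    unmonitored = total_pages - monitored_pages
    true_alarms = tpr * monitored_pages
    false_alarms = fpr * unmonitored
    return true_alarms / (true_alarms + false_alarms)

# A small closed world of 1,000 monitored front pages ...
small_world = precision(tpr=0.88, fpr=0.029,
                        monitored_pages=1_000, total_pages=2_000)
# ... versus the millions of onion pages that crawlers like
# ahmia.fi have found by looking beyond front pages.
open_world = precision(tpr=0.88, fpr=0.029,
                       monitored_pages=1_000, total_pages=1_000_000)

print(f"closed world: {small_world:.0%} of alarms correct")
print(f"open world:   {open_world:.0%} of alarms correct")
```

In the closed world nearly every alarm is correct, but in the open world the false alarms from the huge unmonitored set swamp the true ones, so the vast majority of the classifier's guesses are mistakes.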
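In the same spirit, here is a toy illustration of the padding observation above. The decision rule and cell counts are invented for this sketch and bear no relation to the paper's actual classifier; the point is only that a rule built on traffic features breaks when padding shifts exactly those features:

```python
# Toy sketch (not the paper's classifier): a decision rule built on
# traffic features such as cell counts is brittle, because padding
# cells shift exactly the features the rule relies on.

def looks_like_onion_circuit(cells_sent, cells_received):
    # Hypothetical rule: in this toy world, onion-service circuits
    # exchange only a small number of cells during setup.
    return cells_sent < 10 and cells_received < 15

# Unpadded circuit: the rule flags it correctly.
assert looks_like_onion_circuit(6, 8) is True

# The same circuit with ten padding cells added in each direction:
# the rule no longer matches, and the classifier misses it.
assert looks_like_onion_circuit(6 + 10, 8 + 10) is False
```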

Preliminary analysis of Hacking Team's slides

A few weeks ago, Hacking Team was bragging publicly about a Tor Browser exploit. We've learned some details of their proposed attack from a leaked PowerPoint presentation that was part of the Hacking Team dump.

The good news is that they don't appear to have any exploit on Tor or on Tor Browser. The other good news is that their proposed attack doesn't scale well. They need to put malicious hardware on the local network of their target user, which requires choosing their target, locating her, and then arranging for the hardware to arrive in the right place. So it's not really practical to launch the attack on many Tor users at once.

But they actually don't need an exploit on Tor or Tor Browser. Here's the proposed attack in a nutshell:

1) Pick a target user (say, you), figure out how you connect to the Internet, and install their attacking hardware on your local network (e.g. inside your ISP).

2) Wait for you to browse the web without Tor Browser, i.e. with some other browser like Firefox or Chrome or Safari, and then insert some sort of exploit into one of the web pages you receive (maybe the Flash 0-day we learned about from the same documents, or maybe some other exploit).

3) Once they've taken control of your computer, they configure your Tor Browser to use a SOCKS proxy on a remote computer that they control. In effect, rather than using the Tor client that's part of Tor Browser, you'll be using their remote Tor client, so they get to intercept and watch your traffic before it enters the Tor network.
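The effect of step 3 can be sketched as a hypothetical check of the browser's SOCKS configuration. The preference names follow Firefox's prefs.js conventions; the hosts, ports, and helper function are assumptions made for this illustration:

```python
# Sketch: step 3 amounts to repointing the browser's SOCKS proxy away
# from the local Tor client toward an attacker-controlled host. The
# preference names are Firefox conventions; the addresses and the
# helper function are illustrative assumptions.

def socks_endpoint_is_local(prefs):
    """Return True if the configured SOCKS proxy is the local Tor client."""
    host = prefs.get("network.proxy.socks", "")
    port = prefs.get("network.proxy.socks_port", 0)
    # Tor Browser's bundled Tor client listens on localhost
    # (9150 by default; standalone Tor typically uses 9050).
    return host in ("127.0.0.1", "localhost") and port in (9050, 9150)

# Normal Tor Browser configuration: traffic goes to the bundled Tor client.
assert socks_endpoint_is_local(
    {"network.proxy.socks": "127.0.0.1", "network.proxy.socks_port": 9150})

# After the attack: traffic is handed to a remote Tor client the attacker
# controls, so they can watch it before it enters the Tor network.
assert not socks_endpoint_is_local(
    {"network.proxy.socks": "203.0.113.7", "network.proxy.socks_port": 9050})
```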

You have to stop them at step two, because once they've broken into your computer, they have many options for attacking you from there.

Their proposed attack requires Hacking Team (or your government) to already have you in their sights. This is not mass surveillance — this is very targeted surveillance.

Another answer is to run a system like Tails, which avoids interacting with any local resources. In this case there should be no opportunity to insert an exploit from the local network. But that's still not a complete solution: some coffee shops, hotels, etc. will demand that you interact with their local login page before you can access the Internet. Tails includes what they call their 'unsafe' browser for these situations, and you're at risk during the brief period when you use it.

Ultimately, security here comes down to having safer browsers. We continue to work on ways to make Tor Browser more resilient against attacks, but the key point here is that they'll go after the weakest link on your system — and at least in the scenarios they describe, Tor Browser isn't the weakest link.

As a final point, note that this is just a PowerPoint deck (probably a funding pitch), and we've found no indication yet that they ever followed through on their idea.

We'll update you with more information if we learn anything further. Stay safe out there!

Sue Gardner and the Tor strategy project

Sue Gardner, the former executive director of the Wikimedia Foundation, has been advising Tor informally for several months. She attended Tor's most recent in-person meeting in Valencia in early March and facilitated several sessions. Starting today, and for about the next year, Sue will be working with us to help The Tor Project develop a long-term organizational strategy. The purpose of this strategy project is to work together, all of us, to develop a plan for making Tor as effective and sustainable as it can be.

Sue is a great fit for this project. In addition to being the former executive director of Wikimedia, she has been active in FLOSS communities since 2007. She's an advisor or board member with many organizations that do work related to technology and freedom, including the Wikimedia Foundation, the Sunlight Foundation, the Committee to Protect Journalists, and Global Voices. She has lots of experience developing organizational strategy, growing small organizations, raising money, handling the media, and working with distributed communities. She's a proud recipient of the Nyan Cat Medal of Internet Awesomeness for Defending Internet Freedom, and was recently given the Cultural Humanist of the Year award by the Harvard Humanist Association.

We aim for this project to be inclusive and collaborative. Sue's not going to be making up a strategy for Tor herself: the idea is that she will facilitate the development of strategy, in consultation with the Tor community and Tor stakeholders (all the other people who care about Tor), as much as possible in public, probably on our wikis.

Sue's funding for this project will come via First Look Media, which also means this is a great opportunity to strengthen our connections to our friends at this non-profit organization. (You may know of them because of The Intercept.)

As she does the work, she'll be asking for participation from members of the Tor community. Please help her as much as you can.

I'm excited that we're moving forward with this project. We welcome Sue as we all work together to make security, privacy, and anonymity possible for everyone.

Tor 0.2.5.12 and 0.2.6.7 are released

Tor 0.2.5.12 and 0.2.6.7 fix two security issues that could be used by an attacker to crash hidden services, or crash clients visiting hidden services. Hidden services should upgrade as soon as possible; clients should upgrade whenever packages become available.

These releases also contain two simple improvements to make hidden services a bit less vulnerable to denial-of-service attacks.

We also made a Tor 0.2.4.27 release so that Debian stable can easily integrate these fixes.

The Tor Browser team is currently evaluating whether to put out a new Tor Browser stable release with these fixes, or wait until next week for their scheduled next stable release. (The bugs can introduce hassles for users, but we don't currently view them as introducing any threats to anonymity.)

Changes in version 0.2.5.12 - 2015-04-06

  • Major bugfixes (security, hidden service):
    • Fix an issue that would allow a malicious client to trigger an assertion failure and halt a hidden service. Fixes bug 15600; bugfix on 0.2.1.6-alpha. Reported by "disgleirio".
    • Fix a bug that could cause a client to crash with an assertion failure when parsing a malformed hidden service descriptor. Fixes bug 15601; bugfix on 0.2.1.5-alpha. Found by "DonnchaC".
  • Minor features (DoS-resistance, hidden service):
    • Introduction points no longer allow multiple INTRODUCE1 cells to arrive on the same circuit. This should make it more expensive for attackers to overwhelm hidden services with introductions. Resolves ticket 15515.

Changes in version 0.2.6.7 - 2015-04-06

  • Major bugfixes (security, hidden service):
    • Fix an issue that would allow a malicious client to trigger an assertion failure and halt a hidden service. Fixes bug 15600; bugfix on 0.2.1.6-alpha. Reported by "disgleirio".
    • Fix a bug that could cause a client to crash with an assertion failure when parsing a malformed hidden service descriptor. Fixes bug 15601; bugfix on 0.2.1.5-alpha. Found by "DonnchaC".
  • Minor features (DoS-resistance, hidden service):
    • Introduction points no longer allow multiple INTRODUCE1 cells to arrive on the same circuit. This should make it more expensive for attackers to overwhelm hidden services with introductions. Resolves ticket 15515.
    • Decrease the number of reattempts that a hidden service performs when its rendezvous circuits fail. This reduces the computational cost of running a hidden service under heavy load. Resolves ticket 11447.