Research problem: measuring the safety of the Tor network

We need a better understanding of how much anonymity the Tor network provides against a partial network adversary who observes and/or operates some of the network. Specifically, we want to understand the chances that this class of adversary can observe a user's traffic coming into the network and also the corresponding traffic exiting the network.

Most of our graphs historically have looked at the total number of relays over time. But this simple scalar doesn't take into account relay capacity, exit policies, geographic or network location, etc.

One concrete problem from this bad metric is that when many new relays show up (e.g. due to a political event), we really don't have any insight about how the diversity of the new relays compares to the rest of the network. The simplistic metric also gets misused in academic research papers to produce misleading results like "I can sign up 2% of the Tor relays and capture 50% of the paths," when what they mean is "I can provide half the capacity in the Tor network and capture half the paths."

There are four parts to this research problem. First, we need a broad array of metrics that reflect various instances of our partial network adversary. Second, we need to understand for each metric how stable it is (sensitivity analysis) and what changes in the Tor network would efficiently improve (or harm) it. Third, we should make the calculations as realistic as possible based on actual Tor client behavior. Last, we need a better infrastructure for comparing the safety impact from alternate design scenarios, for example different path selection or route balancing algorithms.

We'd love to work with you to help answer these questions.

Part one: new metrics

The network graphs on our metrics site show the number of relays and advertised capacity over time. More importantly, we've archived all the relay descriptors and network consensus documents going back to 2004. So now we can ask some questions to better capture the safety of the network over time.

What is the entropy of path selection over time? When we add really fast relays, the capacity of the network goes up but the safety of the network can go down because it gets less balanced. We can start with a simple approximation of a Tor client's path selection behavior — e.g., choose the first hop randomly weighted by bandwidth, and choose the final hop randomly weighted by bandwidth from among the nodes that can exit to port 80. I'm particularly interested in the entropy of the first hop and the third hop, since those are the key points for the anonymity Tor provides against a partial network adversary. Calculating entropy on the probability distribution of possible first hops is easy (as it is for last hops), and it's worth looking at these individual components so we have better intuition about where our anonymity comes from. Then we'd also want to look at the unified metric (probability distribution of first cross last), which in the simple approximation is the sum of the two entropies.
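To make this concrete, here is a minimal sketch of the component-entropy calculation under the simple approximation. The relay records (names, bandwidths, exit flags) are invented for illustration; in practice they would come from the archived consensus documents:

```python
import math

def entropy(weights):
    """Shannon entropy (in bits) of selecting an item with probability
    proportional to its weight."""
    total = sum(weights)
    return -sum(w / total * math.log2(w / total) for w in weights if w > 0)

# Hypothetical relay list: (nickname, advertised bandwidth, exits to port 80?).
relays = [
    ("fast1", 5000, False),
    ("fast2", 5000, True),
    ("slow1", 100, True),
    ("slow2", 100, False),
]

# First hop: weighted by bandwidth over all relays.
h_first = entropy([bw for _, bw, _ in relays])

# Last hop: weighted by bandwidth over relays that can exit to port 80.
h_last = entropy([bw for _, bw, can_exit in relays if can_exit])

# In this simple model the two choices are independent, so the entropy of
# the (first, last) pair is just the sum of the component entropies.
h_joint = h_first + h_last
```

Note how two very fast relays dominate the distribution here: capacity is high, but the entropy is far below the log2(4) = 2 bits a perfectly balanced four-relay network would give.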

Then we should do the same entropy calculations, but lumping relays together based on various shared characteristics. For example, country diversity: what's the entropy of path selection over time when you treat all relays in Germany as one big relay, all relays in France as one big relay, etc? How about autonomous system (AS) based diversity, where all the relays in AT&T's network are lumped together?
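The lumped calculation is the same entropy over aggregated weights. A sketch, again with hypothetical relay annotations (in practice the country would come from GeoIP data and the AS from routing tables):

```python
import math
from collections import defaultdict

def entropy(weights):
    total = sum(weights)
    return -sum(w / total * math.log2(w / total) for w in weights if w > 0)

# Hypothetical relays annotated with country and autonomous system.
relays = [
    {"bw": 4000, "country": "DE", "asn": "AS3320"},
    {"bw": 3000, "country": "DE", "asn": "AS8881"},
    {"bw": 2000, "country": "FR", "asn": "AS12322"},
    {"bw": 1000, "country": "US", "asn": "AS7018"},
]

def lumped_entropy(relays, key):
    """Entropy of bandwidth-weighted selection when all relays sharing
    the same key value are treated as one big relay."""
    groups = defaultdict(float)
    for r in relays:
        groups[r[key]] += r["bw"]
    return entropy(groups.values())

country_h = lumped_entropy(relays, "country")  # diversity across countries
as_h = lumped_entropy(relays, "asn")           # diversity across ASes
```

Since lumping coarsens the distribution, the country-level entropy can never exceed the AS-level entropy when every AS sits in a single country, which gives a quick sanity check on the implementation.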

Another angle to evaluate is what expected fraction of paths have the first and last hop in the same country, or the same AS, or the same city, or the same continent. What expected fraction of paths cross at least one ocean? How about looking at the AS-level paths between relays, like the research from Feamster or Edman? What user locations are more safe or less safe in the above metrics?
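Under the independent-selection model, the same-country (or same-AS, same-continent) fraction is a simple product-sum over the two marginal distributions. A sketch with made-up country probabilities:

```python
def same_location_fraction(entry_probs, exit_probs):
    """Expected fraction of paths whose first and last hop share a
    location, given per-location selection probabilities for each end."""
    return sum(p * exit_probs.get(loc, 0.0) for loc, p in entry_probs.items())

# Hypothetical bandwidth-weighted marginals by country.
entry_by_country = {"DE": 0.5, "FR": 0.3, "US": 0.2}
exit_by_country = {"DE": 0.4, "US": 0.6}

frac = same_location_fraction(entry_by_country, exit_by_country)
# 0.5*0.4 + 0.2*0.6 = 0.32 of paths enter and exit in the same country.
```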

It's plausible to imagine we can also gain some intuition when looking at the possible diversity rather than the entropy. How many ASes or countries total are represented in the network at a given time?

The goal here isn't to come up with the one true metric for summarizing Tor's safety. Rather, we need to recognize that there are many threat models to consider at once, so we need many different views into what types of safety the network can offer.

Part two: how robust are these metrics?

For each of the above metrics, how stable are they when you add or remove a few relays? Are certain relays critical to the safety of the Tor network? One way to look at this question is to graph the fall-off of safety as you remove relays from the network, in one case choosing victims randomly and in another choosing them to optimally harm the network. In the latter case, I expect it will look pretty dire. Then run the same scenario, but track the safety of the network as you remove capacity from it. For example, consider that the adversary's cost of removing a relay is equal to its capacity. Then explore these attacks again, but rather than looking at attacking individual relays look at attacks on ASes, countries, or continents.
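One way to sketch the fall-off experiment: remove relays one at a time, either randomly or greedily (a hypothetical adversary removing whichever relay's loss most reduces entropy), and record the entropy after each removal. The bandwidth list below is invented, and entropy stands in for whichever safety metric is under study:

```python
import math
import random

def entropy(weights):
    total = sum(weights)
    return -sum(w / total * math.log2(w / total) for w in weights if w > 0)

def falloff(bandwidths, choose_victim, steps):
    """Entropy of bandwidth-weighted selection after each of `steps`
    relay removals, with `choose_victim` picking the next victim."""
    remaining = list(bandwidths)
    curve = [entropy(remaining)]
    for _ in range(steps):
        remaining.pop(choose_victim(remaining))
        curve.append(entropy(remaining))
    return curve

def random_victim(remaining):
    return random.randrange(len(remaining))

def optimal_victim(remaining):
    # Greedy adversary: remove whichever relay's loss minimizes entropy.
    return min(range(len(remaining)),
               key=lambda i: entropy(remaining[:i] + remaining[i + 1:]))

# Made-up bandwidth distribution for illustration.
bandwidths = [5000, 3000, 1000, 1000, 500, 500, 100, 100]
random_curve = falloff(bandwidths, random_victim, steps=4)
adversarial_curve = falloff(bandwidths, optimal_victim, steps=4)
```

Charging the adversary for capacity instead of relay count is a one-line change: iterate until the sum of removed bandwidths exceeds a budget rather than for a fixed number of steps.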

Then look at the robustness over time: is an adversary that can knock out X% of the capacity in the network (for various values of X) getting more or less influential as the network grows? How about an adversary who can knock out an absolute capacity of K bytes for various values of K?

From the other direction, are there certain geographic or network locations where adding relays with a given capacity will most influence your safety metrics? This influence could be positive ("where should I put my relay to best help the Tor network?") or negative ("where should I put my relay to best attack users?").

Is there any relationship between the rate of new relay arrival (e.g. during political events or after popular conferences) and the level of diversity of these relays?

Part three: how good was the simple approximation?

While the above calculations assume a simplified path selection model, the reality is more complex. Clients avoid using more than one relay from a given "/16" network or family in their paths. They only use relays with the Guard flag for their first hop. They read weighting parameters from the consensus and use them to avoid Guard-flagged nodes for positions other than the first hop and avoid Exit-flagged nodes for positions other than the last hop, in proportion to how much capacity is available in each category. They track how long it takes them to build circuits, and preemptively discard the slowest 20% of their paths.
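As a starting point, here is a sketch of what layering some of these constraints onto the simple model looks like. The relay records, flags, and addresses are invented, and this deliberately omits the consensus weighting parameters, family constraints, and circuit-build-time filtering described above:

```python
import random

# Hypothetical relay records with the flags and addresses a client sees.
relays = [
    {"nick": "g1", "bw": 900, "guard": True,  "exit": False, "ip": "10.1.0.1"},
    {"nick": "g2", "bw": 800, "guard": True,  "exit": False, "ip": "10.2.0.1"},
    {"nick": "m1", "bw": 500, "guard": False, "exit": False, "ip": "10.3.0.1"},
    {"nick": "e1", "bw": 700, "guard": False, "exit": True,  "ip": "10.4.0.1"},
    {"nick": "e2", "bw": 600, "guard": False, "exit": True,  "ip": "10.1.0.9"},
]

def slash16(ip):
    """Network prefix used for the no-two-relays-in-one-/16 rule."""
    return ".".join(ip.split(".")[:2])

def weighted_choice(candidates):
    return random.choices(candidates, weights=[r["bw"] for r in candidates])[0]

def pick_path():
    """Constrained three-hop selection: Guard flag required for the first
    hop, Exit flag for the last, and no two hops in the same /16."""
    guard = weighted_choice([r for r in relays if r["guard"]])
    exit_ = weighted_choice([r for r in relays if r["exit"]
                             and slash16(r["ip"]) != slash16(guard["ip"])])
    used = {slash16(guard["ip"]), slash16(exit_["ip"])}
    middle = weighted_choice([r for r in relays
                              if r is not guard and r is not exit_
                              and slash16(r["ip"]) not in used])
    return guard, middle, exit_
```

Feeding the empirical distribution of paths this produces back into the entropy calculations from part one is what turns the "optimal" curve into the "in practice" curve.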

Accounting for all of these behaviors (especially the last one) will be hard, and you'll want to work closely with the Tor developers to make sure you're moving in the right direction. But even if you don't handle all of them, having a more realistic sense of how clients behave should allow you to better capture the safety in the live Tor network. If you want extra feedback, you can compare your path selection predictions with paths that Tor chooses in practice (full data here).

We should rerun the above experiments with the more accurate client behavior, and see if any of the results change substantially, and if so, why. These changes are great places to recognize and reconsider tradeoffs between what we should do for maximum safety according to these metrics vs what we should do to optimize other metrics like performance and vulnerability to other attacks. Some of these differences are intentional; for example, we don't give the Guard flag to every relay because we want to reduce the churn of entry guards. But others are due to design mistakes; for example, we are probably not giving the Guard flag out as broadly as we should, and I imagine that really hurts the network's safety.

How far away is the "optimal" approximation curve you produced in part one from the "in practice" curve here? How does the divergence from the optimal look over time? That is, were we at 50% of optimal back in 2007, but now we're only at 20%?

Part four: consider alternate designs

Once we've looked at how safe the Tor network has been over time for our broad array of metrics, we should do experiments to evaluate alternate network designs.

Examples include:

A) If we give out Guard flags according to some other algorithm, how much safety can we get back?

B) In Tor 0.2.1.x we started load balancing based on active bandwidth measurements, so we use the bandwidth weights in the network consensus rather than the weights in each relay descriptor. We know that change improved performance, but at what cost to safety?

C) What happens to the network diversity if we put a low cap on the bandwidth weighting of any node that hasn't been assigned a measurement by the bandwidth authorities yet, to protect against relays that lie about their bandwidth? If there's no real effect, we should do it; but if there's a large effect, we should find a better plan.

D) If we move forward with our plans to discard all relays under a given bandwidth capacity, how will various choices of bandwidth threshold impact the network's safety?

E) How about if we discard the X% of paths with the highest expected latency, as suggested by Stephen Rollyson?

F) What if some Tor users choose their paths to optimize a different network property (like latency or jitter), as suggested by Micah Sherr? The infrastructure you've built to measure diversity here should apply there too.

G) If only a given fraction of exit relays support IPv6, how much does it reduce your anonymity to be going to an IPv6-only destination?
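For a design question like C), the machinery from parts one and two gives a direct way to quantify the effect: apply the candidate rule to the weights and compare the resulting entropy. A toy sketch with invented relays, one of which claims a wildly inflated bandwidth and has no measurement yet:

```python
import math

def entropy(weights):
    total = sum(weights)
    return -sum(w / total * math.log2(w / total) for w in weights if w > 0)

# Hypothetical relays: advertised bandwidth, plus whether the bandwidth
# authorities have produced a measurement for them yet.
relays = [
    {"bw": 50000, "measured": False},  # possibly lying about its capacity
    {"bw": 4000, "measured": True},
    {"bw": 3000, "measured": True},
    {"bw": 2000, "measured": True},
]

def capped(relays, cap):
    """Weight list after capping every not-yet-measured relay at `cap`."""
    return [min(r["bw"], cap) if not r["measured"] else r["bw"] for r in relays]

h_uncapped = entropy([r["bw"] for r in relays])
h_capped = entropy(capped(relays, cap=1000))
# With the cap, the unmeasured relay can no longer dominate path
# selection, so entropy (safety, by this metric) goes up.
```

The same before/after comparison pattern applies to the other design questions: D) becomes a filter on minimum bandwidth, A) a change to which relays get the guard flag.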

We've put a lot of effort in the past year into understanding how to improve Tor's performance, but many of these design changes involve trading off safety for performance, and we still have very little understanding about how much safety Tor offers in the first place. Please help!

If you like Java, check out the metrics database tool (and its manual) that Karsten maintains.

I am wondering what AS stands for?

Autonomous system. Think ISP or "internet zone".

http://en.wikipedia.org/wiki/Autonomous_system_%28Internet%29

I've clarified it in the above text too. Thanks!

May I suggest Tor begin its own journal, like the R project's? In my experience, a lot of research is simply shelved because academia sees no "use" for it... Maybe it's just me. Another advantage (especially with the open access model) would be that current research which would otherwise take years to complete and publish could be shared quickly. Would Tor be interested in "amateur" research?

The main academic publication venue for anonymity research is the Privacy Enhancing Technologies Symposium (http://petsymposium.org/). But you're right that the requirements for successful academic publication often have little to do with the quality or relevance of the research.

Karsten has been putting up tech reports at http://metrics.torproject.org/papers.html as he finishes them.

And some of the more important papers Tor has written lately, like the blocking-resistance design and the performance docs, aren't in the right format for formal submission, and I'm just fine with that:
https://svn.torproject.org/svn/projects/design-paper/blocking.pdf
https://blog.torproject.org/blog/why-tor-is-slow

We get pretty good peer review from the PETS community because we're high-profile.

As for "amateur" research, sure, so long as the results themselves are convincing!

Thanks for the reply, and the pointers, I'll be sure to try submitting to PETS.

Regarding the research topic, I've only begun looking into it, but this article seems to be a good place to start and gather ideas from:

Vardi, Y. (1996). Network Tomography: Estimating source-destination traffic intensities from link data. JASA Theory and Method, 91 (433), 365-377.

Best wishes to the Tor Project!

How about you concentrate resources on fixing the fact that Tor is trivially controlled by third parties first before engaging in studies into how secure the current broken design is?

Really does seem like you've disappeared up your own assholes these days. Too scared to upset the masters watching eye, no doubt.

Well, it depends what you mean by "trivially controlled". If you mean that the current Tor network is too small compared to the capacity that a well-funded adversary could dump into it, I want to know just how small that is.

If you mean that we should run the whole network ourselves so it's harder to attack, I disagree. There are plenty of networks out there that are centralized, and that centralization introduces critical problems of its own.

If you mean the blocking-resistance question ("Iran can censor connections to the Tor network"), I agree that's a worthwhile problem to tackle, and we are tackling it (stay tuned), but it isn't this one.

As for concentrating our resources, many academic research groups ask me what they and their students should look at, so fleshing out the question of diversity seems like a great avenue for them to explore.

As for a "fixed" Tor design, we're all ears.

I personally like packets routed through as many countries as possible. Preferably through countries that don't like each other. :)

Thank you for providing this vital anonymity service. There's little doubt it will become even more vital in the near future.

It might help if people are routed through more than one network hub. This uses probability to protect people but might interfere with capacity, as it could overload the system. I don't know anything about the mechanics of how this is done; I am not a CS person, I studied math.

Going through 2 hubs:
If you assume the probability of a hub being compromised is 2%, then a single jump has a 1/50 chance of recording your internet use. If you go through two independent hubs, the chance that they are both owned by an organization spying on you is reduced to 0.04% (as .02*.02=.0004).

This analysis assumes that the organization is able to put the information together. If, for instance, the (.02) threat were divided between 5 different players, the probability that the same organization would own both compromised hubs is less than 0.04%, as this is the total threat of accessing two compromised hubs. Assuming Iran and Egypt both do not play well with others, they might not actually combine their information to spy on web traffic effectively.

I choose safety over performance. I hope nobody is trying to run VoIP over Tor. My understanding is the network is meant mostly for web browsing and IRC.

More people operate middle relays than entrance or exit relays. Assigning many more entry guard flags to relays running on residential modems is a great idea. Currently I believe entry guard flags are something only handed out through the Tor community, not given to everyone.

Because there are more middle relays, it would seem they would be the key point for adding padded data to throw off an adversary controlling both entrance and exit relays. Middle relays need a constant flow of traffic running through them to make timing attacks between entrance and exit more difficult. Maybe even a small transmission delay at middle relays could help prevent this.

Tor states padding and transmission delays cause too much performance loss and aren't used. They probably do, but if that's what it takes for safety, I say go for it. More people are willing to run entrance and middle relays because there's less risk of legal backlash, such as being served DMCA infringement notices.

Take advantage of the huge number of entry and middle relays that can be created, and come up with an algorithm which selects the most random path imaginable and keeps a client-side record of relays used to build paths in the past. This record could prevent previously built path combinations from being used over again, adding to randomness.

I agree having an algorithm that puts weight on high bandwidth over the safety of random paths is prone to abuse. I would rather the path algorithm put weight on randomness rather than speed.

Hope I don't sound patronizing. I'm sure you're more knowledgeable than me, but even I can imagine how a determined and well funded adversary could compare transmission sizes and timing info from router logs at major ISPs. All without ever running a single Tor relay. Just off major ISP router logs!

Seems like it would be harder to monitor Tor with random byte padding and timing delays. Unless I'm missing something, perhaps it's time to implement this stuff.

Data padding for sure, because I've heard it's possible to tell what web page Tor users visit simply by comparing the webpage size to the encrypted Tor stream size. No need to crack encryption, just compare sizes!

Thanks for all your hard work. I can't do it so I depend on the Tor people. I truly am grateful and believe your efforts at anonymity are sincere.

The data is very good. Thanks for the post and I'll bookmark your site to read again.

I think a lot of the problems regarding Tor in recent months come from your side, because you all as academicians and scientists reveal all your plans and secrets in your academic papers. I myself am a scientist, but I have the advantage of having lived a long time in both democratic and dictatorship societies. You have to understand that you are in a war (technologically-wise) and you MUST NOT REVEAL your point of attack to your adversary (or your enemy, if you will).
Your adversary reads carefully and analyzes thoroughly the papers and information you so carelessly reveal. Again, DO NOT FORGET THAT IT IS A WAR, AND IN A WAR YOU MUST SAFEGUARD ALL YOUR SENSITIVE PLANS AND POINTS OF ATTACK.

I would like to see paths viewed from an MPLS/WDM path layer. Even AS level views expose merely administrative domains, many of which may traverse a single SVC / fiber / right-of-way.

While the number of attackers who can position themselves at such backbone layers is small, the advantages of analysis at this level are significant; it would be interesting to quantify per the discussion above.

(Consider the aggregation at trans-oceanic fiber landings and large IX facilities.)

There is a threat of hi-tech targeted totalitarianism but it only has resources for so many targets. I would describe it as the digital continuity of the covert personality destruction programs of the last century, whose targets have delivered their testimonies in the United States of North America. Choice of target is not limited along lines of religion or nationality or lack thereof, but it is systematically targeting the most advanced political idealists on the digital frontier. Homeschooled people are at the highest risk.

It is kind of politically suicidal because it attacks potential allies turning them into determined enemies. Ironically the best method to verify whether targeted totalitarianism is surrounding you with artificial synchronicities is to fake an orgasm on the net. No individual who is not a bigot will work for these programs, so the feedback will be significant. Should you happen to be on their list, they do not hesitate to seize an opportunity for personality destruction even when it might be a trap, and that betrays them.

These programs are an irresponsible waste of dignity and resources in the first place so there is nothing in them that could stop them. Except those of us who feel they will never surrender to plausible deniability. It must end. Finally.

From the adversary's perspective, the most interesting point for a two-front attack against the network would be the chronological footprint a user leaves on a website. How many pages are requested over what period of time, and how much time passes between those requests? It seems that the probability of successfully correlating this information grows exponentially with the size of the footprint. The very habit of real-time browsing can be treacherous. Are there any solutions to this other than avoiding behaving like in the privacy of one's own home?

What about relays operated by the same organization but distributed across many countries? For example, there are tens of relays called Banadora 01, Banadora 02, Banadora 0X,
each in a different country, so it is possible that I'd find more than one Banadora 0x in my path; and if one were the entry and another the exit node, then my connection would be unmasked to this organization.

Another case: what if this organization renamed its servers to different, unrelated names?
Is it possible to make the connection go in simultaneous streams, so only part of the data goes through a certain relay? I am talking about adding an option to build more than one active circuit at a given moment, divide the data into segments, and send each segment through a different path (circuit). Thus if a bad exit node got a segment, it got only part of the data (at least they'd know only the user name, not both the user name and password!).

I want to say thanks to the people of Tor who care about people in poor countries.

I am a "half-geek", having been around mini/micro PCs for over 30 years, and have just started using Tor. I have tried to sell Tor to some of my more PC-knowledgeable friends, but they balk at the Options section, since they have to refer to your site to learn how each option works and what it does. Printing the web page is a mess.

I think that if you would make the Installation and Manual pages available as .pdf files, this would make Tor attractive to a much wider "clientele".

Tor is illegal because hiding your IP is illegal, my ICT teacher said so.......don't be offended if I am wrong

If this site is so legal, why does it use 256-bit encryption

There is no real evidence TOR is legal

give me legal evidence TOR is legal

Is tor legal In the UK
