
Improving Tor's anonymity by changing guard parameters

There are tensions in the Tor protocol design between the anonymity provided by entry guards and the performance improvements from better load balancing. This blog post walks through the research questions I raised in 2011, then summarizes answers from three recent papers written by researchers in the Tor community, and finishes by explaining what Tor design changes we need to make to provide better anonymity, and what we'll be trading off.

Part one: The research questions

In Tor, each client selects a few relays at random, and chooses only from those relays when making the first hop of each circuit. This entry guard design helps in three ways:

First, entry guards protect against the "predecessor attack": if Alice (the user) instead chose new relays for each circuit, eventually an attacker who runs a few relays would be her first and last hop. With entry guards, the risk of end-to-end correlation for any given circuit is the same, but the cumulative risk for all her circuits over time is capped.

Second, they help to protect against the "denial of service as denial of anonymity" attack, where an attacker who runs quite a few relays fails any circuit that he's a part of and that he can't win against, forcing Alice to generate more circuits and thus increasing the overall chance that the attacker wins. Entry guards greatly reduce the risk, since Alice will never choose outside of a few nodes for her first hop.

Third, entry guards raise the startup cost to an adversary who runs relays in order to trace users. Without entry guards, the attacker can sign up some relays and immediately start having chances to observe Alice's circuits. With them, new adversarial relays won't have the Guard flag so won't be chosen as the first hop of any circuit; and even once they earn the Guard flag, users who have already chosen guards won't switch away from their current guards for quite a while.
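
To make that cap concrete, here's a back-of-the-envelope sketch in Python. The numbers are made up (an adversary with 1% of entry and exit capacity, 10,000 circuits), and it ignores bandwidth weighting, but it shows why re-picking relays for every circuit is so much worse than keeping a fixed set of guards.

    # Toy model, not Tor's real path-selection algorithm.  Suppose the
    # adversary controls fraction c of entry capacity and fraction c of
    # exit capacity, and relays are picked independently per circuit.

    c = 0.01          # adversary's capacity fraction (made up)
    circuits = 10000  # circuits Alice builds over some period

    # Without guards: every circuit is a fresh chance for the adversary
    # to be both her first and last hop.
    p_per_circuit = c * c
    p_no_guards = 1 - (1 - p_per_circuit) ** circuits

    # With guards: if none of Alice's three guards belongs to the
    # adversary, no circuit can ever be observed at both ends, no matter
    # how many she builds.  Her cumulative risk is capped at the chance
    # that the one-time guard choice went wrong.
    guards = 3
    p_with_guards = 1 - (1 - c) ** guards

    print(f"no guards:   {p_no_guards:.2f}")    # ~0.63
    print(f"with guards: {p_with_guards:.2f}")  # ~0.03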

In August 2011, I posted these four open research questions around guard rotation parameters:

  1. Natural churn: For an adversary that controls a given number of relays, if the user only replaces her guards when the current ones become unavailable, how long will it take until she's picked an adversary's guard?
  2. Artificial churn: How much more risk does she introduce by intentionally switching to new guards before she has to, to load balance better?
  3. Number of guards: What are the tradeoffs in performance and anonymity from picking three guards vs two or one? By default Tor picks three guards, since if we picked only one then some clients would pick a slow one and be sad forever. On the other hand, picking only one makes users safer.
  4. Better Guard flag assignment: If we give the Guard flag to more or different relays, how much does it change all these answers?

For reference, Tor 0.2.3's entry guard behavior is "choose three guards, adding another one if two of those three go down but going back to the original ones if they come back up, and also throw out (aka rotate) a guard 4-8 weeks after you chose it." I'll discuss in "Part three" of this post what changes we should make to improve this policy.

Part two: Recent research papers

Tariq Elahi, a grad student in Ian Goldberg's group in Waterloo, began to answer the above research questions in his paper Changing of the Guards: A Framework for Understanding and Improving Entry Guard Selection in Tor (published at WPES 2012). His paper used eight months of real-world historical Tor network data (from April 2011 to December 2011) and simulated various guard rotation policies to see which approaches protect users better.

Tariq's paper considered a quite small adversary: he let all the clients pick honest guards, and then added one new small guard to the 800 or so existing guards. The question is then what fraction of clients use this new guard over time. Here's a graph from the paper, showing (assuming all users pick three guards) the vulnerability due to natural churn ("without guard rotation") vs natural churn plus also intentional guard rotation:

Vulnerability from natural vs intentional guard rotation

In this graph their tiny guard node, in the "without guard rotation" scenario, ends up getting used by about 3% of the clients in the first few months, and gets up to 10% by the eight-month mark. The more risky scenario — which Tor uses today — sees the risk shoot up to 14% in the first few months. (Note that the y-axis in the graph only goes up to 16%, mostly because the attacking guard is so small.)

The second paper to raise the issue is from Alex Biryukov, Ivan Pustogarov, and Ralf-Philipp Weinmann in Luxembourg. Their paper Trawling for Tor Hidden Services: Detection, Measurement, Deanonymization (published at Oakland 2013) mostly focuses on other attacks (like how to censor or track popularity of hidden services), but their Section VI.C. talks about the "run a relay and wait until the client picks you as her guard" attack. In this case they run the numbers for a much larger adversary: if they run 13.8% of the Tor network for eight months there's more than a 90% chance of a given hidden service using their guard sometime during that period. That's a huge fraction of the network, but it's also a huge chance of success. And since hidden services in this case are basically the same as Tor clients (they choose guards and build circuits the same way), it's reasonable to conclude that their attack works against normal clients too so long as the clients use Tor often enough during that time.

I should clarify three points here.

First clarifying point: Tariq's paper makes two simplifying assumptions when calling an attack successful if the adversary's relay *ever* gets into the user's guard set. 1) He assumes that the adversary is also either watching the user's destination (e.g. the website she's going to), or he's running enough exit relays that he'll for sure be able to see the corresponding flow out of the Tor network. 2) He assumes that the end-to-end correlation attack (matching up the incoming flow to the outgoing flow) is instantaneous and perfect. Alex's paper argues pretty convincingly that these two assumptions are easier to make in the case of attacking a hidden service (since the adversary can dictate how often the hidden service makes a new circuit, as well as what the traffic pattern looks like), and the paper I describe next addresses the first assumption, but the second one ("how successful is the correlation attack at scale?" or maybe better, "how do the false positives in the correlation attack compare to the false negatives?") remains an open research question.

Researchers generally agree that given a handful of traffic flows, it's easy to match them up. But what about the millions of traffic flows we have now? What levels of false positives (algorithm says "match!" when it's wrong) are acceptable to this attacker? Are there some simple, not too burdensome, tricks we can do to drive up the false positive rates, even if we all agree that those tricks wouldn't work in the "just looking at a handful of flows" case?

More precisely, it's possible that correlation attacks don't scale well because as the number of Tor clients grows, the chance that the exit stream actually came from a different Tor client (not the one you're watching) grows. So the confidence in your match needs to grow along with that or your false positive rate will explode. The people who say that correlation attacks don't scale use phrases like "say your correlation attack is 99.9% accurate" when arguing it. The folks who think it does scale use phrases like "I can easily make my correlation attack arbitrarily accurate." My hope is that the reality is somewhere in between — correlation attacks in the current Tor network can probably be made plenty accurate, but perhaps with some simple design changes we can improve the situation. In any case, I'm not going to try to tackle that research question here, except to point out that 1) it's actually unclear in practice whether you're done with the attack if you get your relay into the user's guard set, or if you are now faced with a challenging flow correlation problem that could produce false positives, and 2) the goal of the entry guard design is to make this issue moot: it sure would be nice to have a design where it's hard for adversaries to get into a position to see both sides, since it would make it irrelevant how good they are at traffic correlation.
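
As a toy illustration of the base-rate problem (these numbers are hypothetical, not taken from any of the papers): even a correlator that is 99.9% accurate in both directions produces mostly false matches once it's run against enough unrelated flows.

    # Hypothetical base-rate arithmetic for correlation at scale.
    true_positive_rate  = 0.999      # chance the real matching flow is flagged
    false_positive_rate = 0.001      # chance an unrelated flow is wrongly flagged
    unrelated_flows     = 1_000_000  # other flows leaving the network at the same time

    expected_true  = 1 * true_positive_rate
    expected_false = unrelated_flows * false_positive_rate   # ~1000 false alarms

    precision = expected_true / (expected_true + expected_false)
    print(f"chance a reported match is the right one: {precision:.4f}")  # ~0.001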

Second clarifying point: it's about the probabilities, and that's intentional. Some people might be scared by phrases like "there's an x% chance over y months to be able to get an attacker's relay into the user's guard set." After all, they reason, shouldn't Tor provide absolute anonymity rather than probabilistic anonymity? This point is even trickier in the face of centralized anonymity services that promise "100% guaranteed" anonymity, when what they really mean is "we could watch everything you do, and we might sell or give up your data in some cases, and even if we don't there's still just one point on the network where an eavesdropper can learn everything." Tor's path selection strategy distributes trust over multiple relays to avoid this centralization. The trouble here isn't that there's a chance for the adversary to win — the trouble is that our current parameters make that chance bigger than it needs to be.

To make it even clearer: the entry guard design is doing its job here, just not well enough. Specifically, *without* using the entry guard design, an adversary who runs some relays would very quickly find himself as the first hop of one of the user's circuits.

Third clarifying point: we're considering an attacker who wants to learn if the user *ever* goes to a given destination. There are plenty of reasonable other things an attacker might be trying to learn, like building a profile of many or all of the user's destinations, but in this case Tariq's paper counts a successful attack as one that confirms (subject to the above assumptions) that the user visited a given destination once.

And that brings us to the third paper, by Aaron Johnson et al: Users Get Routed: Traffic Correlation on Tor by Realistic Adversaries (upcoming at CCS 2013). This paper ties together two previous series of research papers: the first is "what if the attacker runs a relay?" which is what the above two papers talked about, and the second is "what if the attacker can watch part of the Internet?"

The first part of the paper should sound pretty familiar by now: they simulated running a few entry guards that together make up 10% of the guard capacity in the Tor network, and they showed that (again using historical Tor network data, but this time from October 2012 to March 2013) the chance that the user has made a circuit using the adversary's relays is more than 80% by the six month mark.

In this case their simulation includes the adversary running a fast exit relay too, and the user performs a set of sessions over time. They observe that the user's traffic passes over pretty much all the exit relays (which makes sense since Tor doesn't use an "exit guard" design). Or summarizing at an even higher level, the conclusion is that so long as the user uses Tor enough, this paper confirms the findings in the earlier two papers.

Where it gets interesting is when they explain that "the adversary could run a relay" is not the only risk to worry about. They build on the series of papers started by "Location Diversity in Anonymity Networks" (WPES 2004), "AS-awareness in Tor path selection" (CCS 2009), and most recently "An Empirical Evaluation of Relay Selection in Tor" (NDSS 2013). These papers look at the chance that traffic from a given Tor circuit will traverse a given set of Internet links.

Their point, which like all good ideas is obvious in retrospect, is that rather than running a guard relay and waiting for the user to switch to it, the attacker should instead monitor as many Internet links as he can, and wait for the user to use a guard such that traffic between the user and the guard passes over one of the links the adversary is watching.

This part of the paper raises as many questions as it answers. In particular, all the users they considered are in or near Germany. There are also quite a few Tor relays in Germany. How much of their results here can be explained by peculiarities of Internet connectivity in Germany? Are their results predictive in any way about how users on other continents would fare? Or said another way, how can we learn whether their conclusion shouldn't instead be "German Tor users are screwed, because look how Germany's Internet topology is set up"? Second, their scenario has the adversary control the Autonomous System (AS) or Internet Exchange Point (IXP) that maximally deanonymizes the user (they exclude the AS that contains the user and the AS that contains her destinations). This "best possible point to attack" assumption a) doesn't consider how hard it is to compromise that particular part of the Internet, and b) seems like it will often be part of the Internet topology near the user (and thus vary greatly depending on which user you're looking at). And third, like the previous papers, they think of an AS as a single Internet location that the adversary is either monitoring or not monitoring. Some ASes, like large telecoms, are quite big and spread out.

That said, I think it's clear from this paper that there *do* exist realistic scenarios where Tor users are at high risk from an adversary watching the nearby Internet infrastructure and/or parts of the Internet backbone. Changing the guard rotation parameters as I describe in "Part three" below will help in some of these cases but probably won't help in all of them. The canonical example that I've given in talks about "a person in Syria using Tor to visit a website in Syria" remains a very serious worry.

The paper also makes me think about exit traffic patterns, and how to better protect people who use Tor for only a short period of time: many websites pull in resources from all over, especially resources from centralized ad sites. This risk (that it greatly speeds the rate at which an adversary watching a few exit points — or heck, a few ad sites — will be able to observe a given user's exit traffic) provides the most compelling reason I've heard so far to ship Tor Browser Bundle with an ad blocker — or maybe better, with something like Request Policy that doesn't even touch the sites in the first place. On the other hand, Mike Perry still doesn't want to ship an ad blocker in TBB, since he doesn't want to pick a fight with Google and give them even more of a reason to block/drop all Tor traffic. I can see that perspective too.

Part three: How to fix it

Here are five steps we should take, in rough order of how much impact I think each of them would have on the above attacks.

If you like metaphors, think of each time you pick a new guard as a coin flip (heads you get the adversary's guard, tails you're safe this time), and the ideas here aim to reduce both the number and frequency of coin flips.

Fix 1: Tor clients should use fewer guards.

The primary benefit to moving to fewer guards is that there are fewer coin flips every time you pick your guards.

But there's a second benefit as well: right now your choice of guards acts as a kind of fingerprint for you, since very few other users will have picked the same three guards you did. (This fingerprint is only usable by an attacker who can discover your guard list, but in some scenarios that's a realistic attack.) To be more concrete: if the adversary learns that you have a particular three guards, and later sees an anonymous user with exactly the same guards, how likely is it to be you? Moving to two guards helps the math a lot here, since you'll overlap with many more users when everybody is only picking two.
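
Here's a rough way to see the effect. This sketch assumes guards are chosen uniformly at random from about 800 of them; real clients weight by bandwidth, which makes popular guard sets more common, but the direction of the effect is the same.

    from math import comb

    guards_available = 800
    users = 500_000   # hypothetical client population

    for k in (3, 2, 1):
        distinct_sets = comb(guards_available, k)
        users_per_set = users / distinct_sets
        print(f"{k} guard(s): {distinct_sets:>11,} possible sets, "
              f"~{users_per_set:,.2f} users per set on average")

    # 3 guards: ~85,000,000 sets -> a given combination is close to unique
    # 2 guards:     319,600 sets -> a couple of users share each set
    # 1 guard:          800 sets -> hundreds of users share each guard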

On the other hand, the main downside is increased variation in performance. Here's Figure 10 from Tariq's paper:

Average guard capacity for various numbers of guards

"Farther to the right" is better in this graph. When you pick three guards (the red line), the average speed of your guards is pretty good (and pretty predictable), since most guards are pretty fast and it's unlikely you'll pick slow ones for all three. However, when you only pick only one guard (the purple line), the odds go up a lot that you get unlucky and pick a slow one. In more concrete numbers, half of the Tor users will see up to 60% worse performance.

The fix of course is to raise the bar for becoming a guard, so every possible guard will be acceptably fast. But then we have fewer guards total, increasing the vulnerability from other attacks! Finding the right balance (as many guards as possible, but all of them fast) is going to be an ongoing challenge. See Brainstorm tradeoffs from moving to 2 (or even 1) guards (ticket 9273) for more discussion.

Switching to just one guard will also preclude deploying Conflux, a recent proposal to improve Tor performance by routing traffic over multiple paths in parallel. The Conflux design is appealing because it not only lets us make better use of lower-bandwidth relays (which we'll need to do if we want to greatly grow the size of the Tor network), but it also lets us dynamically adapt to congestion by shifting traffic to less congested routes. Maybe some sort of "guard family" idea can work, where a single coin flip chooses a pair of guards and then we split our traffic over them. But if we want to avoid doubling the exposure to a network-level adversary, we might want to make sure that these two guards are near each other on the network — I think the analysis of the network-level adversary in Aaron's paper is the strongest argument for restricting the variety of Internet paths that traffic takes between the Tor client and the Tor network.

This discussion about reducing the number of guards also relates to bridges: right now if you configure ten bridges, you round-robin over all of them. It seems wise for us to instead use only the first bridge in our bridge list, to cut down on the set of Internet-level adversaries that get to see the traffic flows going into the Tor network.

Fix 2: Tor clients should keep their guards for longer.

In addition to choosing fewer guards, we should also avoid switching guards so often. I originally picked "one or two months" for guard rotation since it seemed like a very long time. In Tor 0.2.4, we've changed it to "two or three months". But I think changing the guard rotation period to a year or more is probably much wiser, since it will slow down the curves on all the graphs in the above research papers.

I asked Aaron to make a graph comparing the success of an attacker who runs 10% of the guard capacity, in the "choose 3 guards and rotate them every 1-2 months" case and the "choose 1 guard and never rotate" case:

Attacker success over time: three guards with 1-2 month rotation vs one guard with no rotation

In the "3 guard" case (the blue line), the attacker's success rate rapidly grows to about 25%, and then it steadily grows to over 80% by the six month mark. The "1 guard" case (green line), on the other hand, grows to 10% (which makes sense since the adversary runs 10% of the guards), but then it levels off and grows only slowly as a function of network churn. By the six month mark, even this very large adversary's success rate is still under 25%.

So the good news is that by choosing better guard rotation parameters, we can almost entirely resolve the vulnerabilities described in these three papers. Great!

Or to phrase it more as a research question, once we get rid of this known issue, I'm curious how the new graphs over time will look, especially when we have a more sophisticated analysis of the "network observer" adversary. I bet there are some neat other attacks that we'll need to explore and resolve, but that are being masked by the poor guard parameter issue.

However, fixing the guard rotation period issue is alas not as simple as we might hope. The fundamental problem has to do with "load balancing": allocating traffic onto the Tor network so each relay is used the right amount. If Tor clients choose a guard and stick with it for a year or more, then old guards (relays that have been around and stable for a long time) will see a lot of use, and new guards will see very little use.

I wrote a separate blog post to provide background for this issue: "The lifecycle of a new relay". Imagine if the ramp-up period in the graph from that blog post were a year long! People would set up fast relays, they would get the Guard flag, and suddenly they'd see little to no traffic for months. We'd be throwing away easily half of the capacity volunteered by relays.

One approach to resolving the conflict would be for the directory authorities to track how much of the past n months each relay has had the Guard flag, and publish a fraction in the networkstatus consensus. Then we'd teach clients to rebalance their path selection choices so a relay that's been a Guard for only half of the past year only counts 50% as a guard in terms of using that relay in other positions in circuits. See Load balance right when we have higher guard rotation periods (ticket 9321) for more discussion, and see Raise our guard rotation period (ticket 8240) for earlier discussions.
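
Here's a minimal sketch of that idea in Python (my reading of the ticket, not its exact algorithm, and the relay names and numbers are made up): the guard-fraction value from the consensus simply splits each relay's weight into a "guard" share and a share that stays available for other positions.

    def split_weight(consensus_weight, guard_fraction):
        """guard_fraction = fraction of the past year this relay had the Guard flag."""
        guard_share = consensus_weight * guard_fraction
        other_share = consensus_weight * (1 - guard_fraction)
        return guard_share, other_share

    # Hypothetical relays: an established guard, a relay that earned the
    # flag six months ago, and a plain middle relay.
    for name, weight, gf in [("oldguard", 8000, 1.0),
                             ("newguard", 8000, 0.5),
                             ("middle",   3000, 0.0)]:
        guard_w, other_w = split_weight(weight, gf)
        print(f"{name:9s} counts {guard_w:6.0f} toward guard selection, "
              f"{other_w:6.0f} toward middle/exit selection")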

Yet another challenge here is that sticking to the same guard for a year gives plenty of time for an attacker to identify the guard and attack it somehow. It's particularly easy to identify the guard(s) for hidden services currently (since as mentioned above, the adversary can control the rate at which hidden services make new circuits, simply by visiting the hidden service), but similar attacks can probably be made to work against normal Tor clients — see e.g. the http-level refresh tricks in How Much Anonymity does Network Latency Leak? This attack would effectively turn Tor into a network of one-hop proxies, to an attacker who can efficiently enumerate guards. That's not a complete attack, but it sure does make me nervous.

One possible direction for a fix is to a) isolate streams by browser tab, so all the requests from a given browser tab go to the same circuit, but different browser tabs get different circuits, and then b) stick to the same three-hop circuit (i.e. same guard, middle, and exit) for the lifetime of that session (browser tab). How to slow down guard enumeration attacks is a tough and complex topic, and it's too broad for this blog post, but I raise the issue here as a reminder of how interconnected anonymity attacks and defenses are. See Slow Guard Discovery of Hidden Services and Clients (ticket 9001) for more discussion.

Fix 3: The Tor code should better handle edge cases where you can't reach your guard briefly.

If a temporary network hiccup makes your guard unreachable, you switch to another one. But how long is it until you switch back? If the adversary's goal is to learn whether you ever go to a target website, then even a brief switch to a guard that the adversary can control or observe could be enough to mess up your anonymity.

Tor clients fetch a new networkstatus consensus every 2-4 hours, and they are willing to retry non-running guards if the new consensus says they're up again.

But I think there are a series of little bugs and edge cases where the Tor client abandons a guard more quickly than it should. For example, we mark a guard as failed if any of our circuit requests time out before finishing the handshake with the first hop. We should audit both the design and the source code with an eye towards identifying and resolving these issues.

We should also consider whether an adversary can *induce* congestion or resource exhaustion to cause a target user to switch away from her guard. Such an attack could work very nicely coupled with the guard enumeration attacks discussed above.

Most of these problems exist because in the early days we emphasized reachability ("make sure Tor works") over anonymity ("be very sure that your guard is gone before you try another one"). How should we handle this tradeoff between availability and anonymity: should you simply stop working if you've switched guards too many times recently? I imagine different users would choose different answers to that tradeoff, depending on their priorities. It sounds like we should make it easier for users to select "preserve my anonymity even if it means lower availability". But at the same time, we should remember the lessons from Anonymity Loves Company: Usability and the Network Effect about how letting users choose different settings can make them more distinguishable.

Fix 4: We need to make the network bigger.

We've been working hard in recent years to get more relay capacity. The result is a more than four-fold increase in network capacity since 2011:

Growth in total Tor network bandwidth capacity since 2011

As the network grows, an attacker with a given set of resources will have less success at the attacks described in this blog post. To put some numbers on it, while the relay adversary in Aaron's paper (who carries 660mbit/s of Tor traffic) represented 10% of the guard capacity in October 2012, that very same attacker would have been 20% of the guard capacity in October 2011. Today that attacker is about 5% of the guard capacity. Growing the size of the network translates directly into better defense against these attacks.

However, the analysis is more complex when it comes to a network adversary. Just adding more relays (and more relay capacity) doesn't always help. For example, adding more relay capacity in a part of the network that the adversary is already observing can actually *decrease* anonymity, because it increases the fraction the adversary can watch. We discussed many of these issues in the thread about turning funding into more exit relays. For more details about the relay distribution in the current Tor network, check out Compass, our tool to explore what fraction of relay capacity is run in each country or AS. Also check out Lunar's relay bubble graphs.

Yet another open research question in the field of anonymous communications is how the success rate of a network adversary changes as the Tor network changes. If we were to plot the success rate of the *relay* adversary using historical Tor network data over time, it's pretty clear that the success rate would be going down over time as the network grows. But what's the trend for the success rate of the network adversary over the past few years? Nobody knows. It could be going up or down. And even if it is going down, it could be going down quickly or slowly.

(Read more in Research problem: measuring the safety of the Tor network where I describe some of these issues in more detail.)

Recent papers have gone through enormous effort to get one, very approximate, snapshot of the Internet's topology. Doing that effort retroactively and over long and dynamic time periods seems even more difficult and more likely to introduce errors.

It may be that the realities of Internet topology centralization make it so that there are fundamental limits on how much safety Tor users can have in a given network location. On the other hand, researchers like Aaron Johnson are optimistic that "network topology aware" path selection can improve Tor's protection against this style of attack. Much work remains.

Fix 5: We should assign the guard flag more intelligently.

In point 1 above I talked about why we need to raise the bar for becoming a guard, so all guards can provide adequate bandwidth. On the other hand, having fewer guards is directly at odds with point 4 above.

My original guard rotation parameters blog post ends with this question: what algorithm should we use to assign Guard flags such that a) we assign the flag to as many relays as possible, yet b) we minimize the chance that Alice will use the adversary's node as a guard?

We should use historical Tor network data to pick good values for the parameters that decide which relays become guards. This remains a great thesis topic if somebody wants to pick it up.

Part four: Other thoughts

What does all of this discussion mean for the rest of Tor? I'll close by trying to tie this blog post to the broader Tor world.

First, all three of these papers come from the Tor research community, and it's great that Tor gets such attention. We get this attention because we put so much effort into making it easy for researchers to analyze Tor: we've worked closely with these authors to help them understand Tor and focus on the most pressing research problems.

In addition, don't be fooled into thinking that these attacks only apply to Tor: using Tor is still better than using any other tool, at least in quite a few of these scenarios. That said, some other attacks in the research literature might be even easier than the attacks discussed here. These are fast-moving times for anonymity research. "Maybe you shouldn't use the Internet then" is still the best advice for some people.

Second, the Tails live CD doesn't use persistent guards. That's really bad I think, assuming the Tails users have persistent behavior (which basically all users do). See their ticket 5462.

Third, the network-level adversaries rely on being able to recognize Tor flows. Does that argue that using pluggable transports, with bridges, might change the equation if it stops the attacker from recognizing Tor users?

Fourth, I should clarify that I don't think any of these large relay-level adversaries actually exist, except as a succession of researchers showing that it can be done. (GCHQ apparently ran a small number of relays a while ago, but not in a volume or duration that would have enabled this attack.) Whereas I *do* think that the network-level attackers exist, since they already invested in being able to surveil the Internet for other reasons. So I think it's great that Aaron's paper presents the dual risks of relay adversaries and link adversaries, since most of the time when people are worrying about one of them they're forgetting the other one.

Fifth, there are still some ways to game the bandwidth authority measurements (here's the spec) into giving you more than your fair share of traffic. Ideally we'd adapt a design like EigenSpeed so it can measure fast relays both robustly and accurately. This question also remains a great thesis topic.

And finally, as everybody wants to know: was this attack how "they" busted recent hidden services (Freedom Hosting, Silk Road, the attacks described in the latest Guardian article)? The answer is apparently no in each case, which means the techniques they *did* use were even *lower* hanging fruit. The lesson? Security is hard, and you have to get it right at many different levels.

The lifecycle of a new relay

Many people set up new fast relays and then wonder why their bandwidth is not fully loaded instantly. In this post I'll walk you through the lifecycle of a new fast non-exit relay, since Tor's bandwidth estimation and load balancing has gotten much more complicated in recent years. I should emphasize that the descriptions here are in part anecdotal — at the end of the post I ask some research questions that will help us make better sense of what's going on.

I hope this summary will be useful for relay operators. It also provides background for understanding some of the anonymity analysis research papers that people have been asking me about lately. In an upcoming blog post, I'll explain why we need to raise the guard rotation period (and what that means) to improve Tor's anonymity. [Edit: here is that blog post]

A new relay, assuming it is reliable and has plenty of bandwidth, goes through four phases: the unmeasured phase (days 0-3) where it gets roughly no use, the remote-measurement phase (days 3-8) where load starts to increase, the ramp-up guard phase (days 8-68) where load counterintuitively drops and then rises higher, and the steady-state guard phase (days 68+).


The phases of a new relay

Phase one: unmeasured (days 0-3).

When your relay first starts, it does a bandwidth self-test: it builds four circuits into the Tor network and back to itself, and then sends 125KB over each circuit. This step bootstraps Tor's passive bandwidth measurement system, which estimates your bandwidth as the largest burst you've done over a 10 second period. So if all goes well, your first self-measurement is 4*125K/10 = 50KB/s. Your relay publishes this number in your relay descriptor.
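
Written out, the self-measurement arithmetic is just the following (treating KB as 1024 bytes, which is my assumption here):

    circuits = 4
    bytes_per_circuit = 125 * 1024   # 125KB pushed over each self-test circuit
    window_seconds = 10              # largest burst is measured over 10 seconds

    advertised = circuits * bytes_per_circuit / window_seconds
    print(f"first self-measurement: {advertised / 1024:.0f} KB/s")  # 50 KB/s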

The directory authorities list your relay in the network consensus, and clients get good performance (and balance load across the network) by choosing relays proportional to the bandwidth number listed in the consensus.

Originally, the directory authorities would just use whatever bandwidth estimate you claimed in your relay descriptor. As you can imagine, that approach made it cheap for even a small adversary to attract a lot of traffic by simply lying. In 2009, Mike Perry deployed the "bandwidth authority" scripts, where a group of fast computers around the Internet (called bwauths) do active measurements of each relay, and the directory authorities adjust the consensus bandwidth up or down depending on how the relay compares to other relays that advertise similar speeds. (Technically, we call the consensus number a "weight" rather than a bandwidth, since it's all about how your relay's number compares to the other numbers, and once we start adjusting them they aren't really bandwidths anymore.)
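
As a very simplified sketch of that comparison step (the real bwauth scripts are considerably more involved, so treat this as the shape of the idea rather than the actual algorithm, and the function and numbers here are illustrative):

    def consensus_weight(advertised, measured, peer_measurements):
        """Scale the advertised value by how the relay's measurement compares
        to the average measurement of relays advertising similar speeds."""
        peer_average = sum(peer_measurements) / len(peer_measurements)
        return advertised * (measured / peer_average)

    # A relay that advertises 5000 but measures twice as fast as its peers
    # gets weighted up, so clients send it more traffic.
    print(consensus_weight(advertised=5000, measured=800,
                           peer_measurements=[350, 400, 450]))   # -> 10000.0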

The bwauth approach isn't ungameable, but it's a lot better than the old design. Earlier this year we plugged another vulnerability by capping your consensus weight to 20KB until a threshold of bwauths have an opinion about your relay — otherwise there was a several-day window where we would use your claimed value because we didn't have anything better to use.

So that's phase one: your new relay gets basically no use for the first few days of its life because of the low 20KB cap, while it waits for a threshold of bwauths to measure it.

Phase two: remote measurement (days 3-8).

Remember how I said the bwauths adjust your consensus weight based on how you compare to similarly-sized relays? At the beginning of this phase your relay hasn't seen much traffic, so your peers are the other relays who haven't seen (or can't handle) much traffic. Over time though, a few clients will build circuits through your relay and push some traffic, and the passive bandwidth measurement will provide a new larger estimate. Now the bwauths will compare you to your new (faster) peers, giving you a larger consensus weight, thus driving more clients to use you, in turn raising your bandwidth estimate, and so on.

Tor clients generally make three-hop circuits (that is, paths that go through three relays). The first position in the path, called the guard relay, is special because it helps protect against a certain anonymity-breaking attack. Here's the attack: if you keep picking new paths at random, and the adversary runs a few relays, then over time the chance drops to zero that *every single path you've made* is safe from the adversary. The defense is to choose a small number of relays (called guards) and always use one of them for your first hop — either you chose wrong, and one of your guards is run by the adversary and you lose on many of your paths; or you chose right and all of your paths are safe. Read the Guard FAQ for more details.

Only stable and reliable relays can be used as guards, so no clients are willing to use your brand new relay as their first hop. And since in this story you chose to set up a non-exit relay (so you won't be the one actually making connections to external services like websites), no clients will use it as their third hop either. That means all of your relay's traffic is coming from being the second hop in circuits.

So that's phase two: once the bwauths have measured you and the directory authorities lift the 20KB cap, you'll attract more and more traffic, but it will still be limited because you'll only ever be a middle hop.

Phase three: Ramping up as a guard relay (days 8-68).

This is the point where I should introduce consensus flags. Directory authorities assign the Guard flag to relays based on three characteristics: "bandwidth" (they need to have a large enough consensus weight), "weighted fractional uptime" (they need to be working most of the time), and "time known" (to make attacks more expensive, we don't want to give the Guard flag to relays that haven't been around a while first). This last characteristic is most relevant here: on today's Tor network, you're first eligible for the Guard flag on day eight.

Clients will only be willing to pick you for their first hop if you have the "Guard" flag. But here's the catch: once you get the Guard flag, all the rest of the clients back off from using you for their middle hops, because when they see the Guard flag, they assume that you have plenty of load already from clients using you as their first hop. Now, that assumption will become true in the steady-state (that is, once enough clients have chosen you as their guard node), but counterintuitively, as soon as you get the Guard flag you'll see a dip in traffic.

Why do clients avoid using relays with the Guard flag for their middle hop? Clients look at the scarcity of guard capacity, and the scarcity of exit capacity, and proportionally avoid using relays for positions in the path that aren't scarce. That way we allocate available resources best: relays with the Exit flag are used mostly for exiting when they're scarce, and relays with the Guard flag are used mostly for entering when they're scarce.
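
As a toy model of that rule (the real consensus publishes a set of bandwidth weights with names like Wmg, computed by the directory authorities; this sketch with invented numbers only shows the shape of the idea):

    def middle_use_fraction(flag_capacity, total_capacity, positions=3):
        """How much of a flagged relay's capacity is left over for middle use."""
        needed = total_capacity / positions       # capacity each position needs
        spare = max(0.0, flag_capacity - needed)  # whatever the flagged position doesn't need
        return spare / flag_capacity

    total = 9000   # hypothetical total network capacity
    print(middle_use_fraction(2500, total))  # guards scarce    -> 0.0: save them for entry
    print(middle_use_fraction(4500, total))  # guards plentiful -> ~0.33: use them as middles too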

It isn't optimal to allow this temporary dip in traffic (since we're not taking advantage of resources that you're trying to contribute), but it's a short period of time overall: clients rotate their guard nodes every 4-8 weeks, so pretty soon some of them will rotate onto your relay.

To be clear, there are two reasons why we have clients rotate their guard relays, and the reasons are two sides of the same coin: first is the above issue where new guards wouldn't see much use (since only new clients, picking their guards for the first time, would use a new guard), and second is that old established guards would accrue an ever-growing load since they'd have the traffic from all the clients that ever picked them as their guard.

One of the reasons for this blog post is to give you background so when I later explain why we need to extend the guard rotation period to many months, you'll understand why we can't just crank up the number without also changing some other parts of the system to keep up. Stay tuned for more details there, or if you don't want to wait you can read the original research questions and the followup research papers by Elahi et al and Johnson et al.

Phase four: Steady-state guard relay (days 68+).

Once your relay has been a Guard for the full guard rotation period (up to 8 weeks in Tor 0.1.1.11-alpha through 0.2.4.11-alpha, and up to 12 weeks in Tor 0.2.4.12-alpha and later), it should reach steady-state where the number of clients dropping it from their guard list balances the number of clients adding it to their guard list.

Research question: what do these phases look like with real-world data?

All of the above explanations, including the graphs, are just based on anecdotes from relay operators and from ad hoc examination of consensus weights.

Here's a great class project or thesis topic: using our publicly available Tor network metrics data, track the growth pattern of consensus weights and bandwidth load for new relays. Do they match the phases I've described here? Are the changes inside a given phase linear like in the graphs above, or do they follow some other curve?

Are there trends over time, e.g. it used to take less time to ramp up? How long are the phases in reality — for example, does it really take three days before the bwauths have a measurement for new relays?

How does the scarcity of Guard or Exit capacity influence the curves or trends? For example, when guard capacity is less scarce, we expect the traffic dip at the beginning of phase three to be less pronounced.

How different were things before we added the 20KB cap in the first phase, or before we did remote measurements at all?

Are the phases, trends, or curves different for exit relays than for non-exit relays?

The "steady-state" phase assumes a constant number of Tor users: in situations where many new users appear (like the botnet invasion in August 2013), current guards will get unbalanced. How quickly and smoothly does the network rebalance as clients shift guards after that?

Transparency, openness, and our 2012 financial docs

Now that the standard audit is complete, our 2012 state and federal tax filings are available. Our 2012 Annual Report is also available. We publish all of our related tax documents because we believe in transparency. All US non-profit organizations are required by law to make their tax filings available to the public on request by US citizens. We want to make them available for all.

Part of our transparency is simply publishing the tax documents for your review. The other part is publishing what we're working on in detail. We hope you'll join us in furthering our mission (a) to develop, improve and distribute free, publicly available tools and programs that promote free speech, free expression, civic engagement and privacy rights online; (b) to conduct scientific research regarding, and to promote the use of and knowledge about, such tools, programs and related issues around the world; (c) to educate the general public around the world about privacy rights and anonymity issues connected to Internet use.

All of this means you can look through our source code, including our design documents, and all open tasks, enhancements, and bugs available on our tracking system. Our research reports are available as well. From a technical perspective, all of this free software, documentation, and code allows you and others to assess the safety and trustworthiness of our research and development. On another level, we have a 10 year track record of doing high quality work, saying what we're going to do, and doing what we said.

Internet privacy and anonymity are more important and rarer than ever. Please help keep us going by getting involved, donating, or advocating for a free Internet with privacy, anonymity, and control over your identity.

Tor's Response to Prism Surveillance Program

Due to several requests received today from members of the press community and others, we felt it was in the best interest of time and consistency to provide a statement regarding today's developments and stories surrounding the NSA Prism surveillance program.

The Tor Project is a nonprofit 501(c)(3) organization dedicated to providing tools to help people manage their privacy on the Internet. Beyond our free, open source technology and extensive research, we actively foster important conversations with many global organizations in order to help people around the world understand the value of privacy and anonymity online. As a result, members of the core Tor team and the greater Tor community are out in the world sharing knowledge and insights with countless individuals every day - many times handing out free Tor stickers, with no donation requested or expected. Edward Snowden, like tens of thousands of people, put Tor stickers on his devices. He likely got them at a conference from one of us in the past year.

Today, as always, the team at Tor remains committed to building innovative, sustainable technology solutions to help keep the doors to freedom of expression open.

For more on our view on this situation visit also our blog post:
https://blog.torproject.org/blog/prism-vs-tor.

For further questions please contact us at execdir@torproject.org.

A weekend at New England Give Camp

Trip Report for New England Give Camp 2013

I spent the entire weekend with New England Give Camp at Microsoft Research in Cambridge, MA. I was there representing the non-profits ipv tech and Tor, and offering myself as a technical volunteer to help out other non-profits. Over the 48 hours, here's what I helped out doing:

  • Transition House
    • Help evaluate their IT systems
    • Look at, reverse engineer, and fix their Alice database system
  • Emerge
    • Update their WordPress installation
    • Help fix the rotating images on the site
  • ipv tech
    • Hack on fuerza app
    • Get fuerza into a git repo, now here at gitorious
    • rewrite the app to be markdown and static files to work offline
  • Children's Charter
    • Help resurrect their hacked WordPress installation and build them a new site.

I also did a 30-minute talk about technology and intimate partner violence. Over the past few years, I've seen every possible technology used to stalk, harass, and abuse people--and those that help them. I'm helping victims and advocates use those same technologies to empower themselves and, in most cases, turn the tables on the abusers. The ability to be anonymous and be free from surveillance for once, even for an hour, is cherished by the victims and affected advocates.

Our team was great. Kevin, Paul, John, Bob, Carmine, Adam, and Sarah did a great job at keeping motivated, making progress, and joking along the way. Microsoft, Whole Foods, and a slew of sponsors offered endless food, sugary drinks, beautiful views, and encouragement throughout the weekend.

Cambridge Community Television interviewed me at the very end of the event. There's also a Flickr group full of pictures.

Overall it was a great experience. I encourage you to volunteer next year.

Transparency, openness, and our 2011 financial docs

After our standard audit, our 2011 state and federal tax filings are available. We publish all of our related tax documents because we believe in transparency. All US non-profit organizations are required by law to make their tax filings available to the public on request by US citizens. We want to make them available for all.

Part of our transparency is simply publishing the tax documents for your review. The other part is publishing what we're working on in detail. We hope you'll join us in furthering our mission (a) to develop, improve and distribute free, publicly available tools and programs that promote free speech, free expression, civic engagement and privacy rights online; (b) to conduct scientific research regarding, and to promote the use of and knowledge about, such tools, programs and related issues around the world; (c) to educate the general public around the world about privacy rights and anonymity issues connected to Internet use.

All of this means you can look through all of our source code, including our design documents, and all open tasks, enhancements, and bugs available on our tracking system. Our research reports are available as well. From a technical perspective, all of this free software, documentation, and code allows you and others to assess the safety and trustworthiness of our research and development. On another level, we have a 10 year track record of doing high quality work, saying what we're going to do, and doing what we said.

The world is moving towards new norms of reduced personal privacy and control. This makes anonymity all the more rare and valuable. Please help keep us going by getting involved, donating, or advocating for a free Internet with privacy, anonymity, and control over your identity.

Protecting bridge operators from probing attacks

Although Tor was originally designed to enhance privacy, the network is increasingly being used to provide access to websites from countries where Internet connectivity is filtered. To better support this goal, Tor introduced the feature of bridge nodes – Tor nodes which route traffic but are not published in the list of available Tor nodes. Consequently, bridges are commonly not blocked in countries where access to the public Tor nodes is blocked (with the exception of China and Iran).

For bridge nodes to work well, we need lots of them. This is both to make it hard to block them all, and also to provide enough bandwidth capacity for all their users. To help achieve this goal, the Tor Browser Bundle now makes it as easy as clicking a button to turn a normal Tor client into a bridge. The upcoming version of Tor will even set up port forwarding on home routers (through UPnP and NAT-PMP) to make it easier still.

There are, however, potential risks to Tor users if they turn their client into a bridge. These were summarized in a paper by Jon McLachlan and Nicholas Hopper: "On the risks of serving whenever you surf". I discussed these risks in a previous blog post, along with ways to mitigate them. In this blog post I'll cover some initiatives the Tor project is working on (or considering), which could further reduce the risks.

The attack that McLachlan and Hopper propose involves finding a list of bridges, and then probing them over time. If the bridge responds at all, then we know that the bridge operator could be surfing the web through Tor. So, if the attacker suspects that a blogger is posting through Tor and the blogger's Tor client is also a bridge, then only the bridges which are running at the time a blog post was written could belong to the blogger. There are likely to be quite a few such bridges, but if the attack is repeated, the set of potential bridge IP addresses will be narrowed down.
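
The narrowing step is a plain set intersection. Here's a sketch with invented bridge identifiers, just to show how quickly repeated observations shrink the candidate set:

    def narrow(candidates, reachable_per_post):
        for reachable in reachable_per_post:
            candidates &= reachable     # keep only bridges up at the time of this post
        return candidates

    all_bridges = {"b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8"}
    post1 = {"b1", "b2", "b3", "b5", "b8"}   # bridges reachable when post 1 appeared
    post2 = {"b2", "b3", "b4", "b8"}
    post3 = {"b3", "b6", "b8"}

    print(narrow(set(all_bridges), [post1, post2, post3]))   # {'b3', 'b8'}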

What is more, the paper also proposes measuring how quickly each bridge responds, which might tell the attacker something about how heavily the bridge operator is using their Tor client. If the attacker probes the rest of the Tor network, and sees a node with performance patterns similar to the bridge being probed, then it is likely that there is a connection going through that node and that bridge. In some circumstances this attack could be used to track connections all the way through the Tor network, and back to a client who is also acting as a bridge.

One way of reducing the risk of these attacks succeeding is to run the bridge on a device which is always on. In this case, just because a bridge is running doesn't mean the operator is using Tor. Therefore less information is leaked to a potential attacker. As laptops become more prevalent, keeping a bridge on 24/7 can be inconvenient, so the Tor Project is working on two bits of hardware (known as Torouters) which can be left running a bridge even if the user's computer is off. If someone probes the bridge now, they can't learn much about whether the operator is using Tor as a client, and so it leaks less information about what the user is doing.

Having an always-on bridge helps resist profiling based on when a client is on. However, evaluating the risk of profiling based on bridge performance is more complex. The reason this attack works is that when a node is congested, increasing traffic on one connection (say, due to the bridge operator using Tor) decreases the traffic on another (say, the attacker probing the bridge). If you run a Tor client on your PC, and a Tor bridge on a separate device, this reduces the opportunity for congestion on one leading to congestion on the other. However, there is still the possibility for congestion on the network connection which the bridge and device share, and maybe you do want to use the bridge device as a client too. So the Torouter helps, but doesn't fix the problem.

To fully decouple bridges from clients, they need to run on different hardware and networks. Here the Tor Cloud project is ideal – bridges run on Amazon's (or another cloud provider's) infrastructure, so congestion on the bridge can't affect the Tor client running on your PC, or its network.

But maybe you do want to use your bridge as a client for whatever reason. Recall that the first step of the attack proposed by McLachlan and Hopper is to find the bridges' IP addresses. This is becoming increasingly difficult as more bridges are being set up. Also, there is work in progress to make it harder to scan networks for bridges. Originally this was intended to make it harder to find and block bridges, but it applies equally well to preventing probing. These schemes (e.g. BridgeSPA by Smits et al.) mean that just because you have a bridge's IP address, it doesn't mean that you can route traffic through it (to measure performance) or even check if the bridge is running. To do either, you need to have a secret which is distributed only to authorized users of the bridge.

There is still more work to be done in protecting bridge operators, but the projects discussed above (and in the previous blog post) certainly improve the situation. Work continues both in academia and at the Tor Project to better understand the problem and develop further fixes.

Experimental Defense for Website Traffic Fingerprinting

Updated 09/05/2011: Add link to actual implementation patch on gitweb

Website fingerprinting is the act of recognizing web traffic through surveillance despite the use of encryption or anonymizing software. The general idea is to leverage the fact that many web sites have specific fixed request patterns and response byte counts that are known beforehand. This information can be used to recognize your web traffic despite attempts at encryption or tunneling. Websites that have an abundance of static content and a fixed request structure tend to be vulnerable to this type of surveillance. Unfortunately, there is enough static content on most websites for this to be the case.

Early work was quick to determine that simple packet-based encryption schemes (such as wireless and/or VPN encryption) were insufficient to prevent recognition of traffic patterns created by popular websites in the encrypted stream. Later, a small-scale study determined that a lot of information could be extracted from HTTPS streams using these same approaches against specific websites.

Despite these early results, whenever researchers tried naively applying these techniques to Tor-like systems, they failed to come up with publishable results (meaning the attack did not work against Tor), due largely to the fixed 512 byte cell size, as well as the multiplexing of Tor client traffic over a single TLS connection.

However, last month, a group of researchers succeeded in performing this attack where the others had failed. Their success hinged largely on their use of a simplified yet well-chosen feature set for training their classifiers. Where other researchers simply dumped packet sizes and timings into their classifiers and unsurprisingly got poor results against Tor traffic, this group extracted the time, quantity and direction of traffic, and discarded irrelevant control information such as TCP ACKs.

While the research methodology of the attack side of their work is particularly exemplary (certainly the most thorough study in website fingerprinting to date), their results are still likely insufficient to deploy against the network and expect to catch people engaging in single visits of forbidden websites. For such a use case, even their relatively low false positive rate is going to cause a lot of issues when deployed against large quantities of traffic. Concurrent use of multiple AJAX-enabled websites and/or other applications will also frustrate an attacker. Even without concurrent activity, their "open-world" experiment (the most realistic scenario for dragnet surveillance of Tor traffic) shows true positive accuracy of around 55%.

However, repeated observations over long periods of time are likely sufficient to develop certainty. It is also possible that extracting actual application data lengths from Tor TLS headers would add additional accuracy.

Hence, this is a rather nasty attack. It is basically a one-ended version of the age-old end-to-end correlation attack (where an adversary attempts to observe both the entrance to the network and the exits to correlate traffic flows). With the website fingerprinting attack, the adversary only needs to observe a single entry (a bridge, a guard node, or a regional firewall) of the network to begin gathering information about users who use that entry point.

However, because the attack is only one-ended, many defenses that would be useless against the full end-to-end attack can be considered. In the countermeasures section of their paper, the researchers point out that a Firefox addon that simply performs background HTTP requests concurrent to normal user activity was enough to foil their classifier.

We disagree with the background fetch approach because a slightly more sophisticated attacker could train a separate classifier to recognize the background cover traffic and subtract it before classifying the remainder. Given this concern, the background request defense does not seem worth the additional network load until it has been studied in more detail.

Instead, we are deploying an experimental defense in today's Tor Browser Bundle release that is specifically designed to reduce the information available for feature extraction without adding overhead. The defense is to enable HTTP pipelining, and to randomize the pipeline size as well as the order of requests. The source code to the implementation can be viewed on gitweb.

Since normal, non-randomized pipelining is still off by default to this day in Firefox, we are assuming that the published attack results are against serialized request/response behavior, which provides significantly more feature information to the attacker. In particular, we believe a randomized pipeline will eliminate or reduce the utility of the 'Size Marker', 'Number Marker', 'Number of Packets', and 'Occurring Packet Sizes' features on sites that support pipelining, due to the batching of requests and responses of arbitrary sizes. More generally, the randomized pipeline should obscure the request vs response size and request ordering information available to the classifier.
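
To illustrate the idea in the abstract, here is roughly the kind of transformation a randomized pipeline applies to a page's subresource requests. This is only a conceptual sketch with made-up parameters; the actual implementation lives in the Firefox HTTP stack and is what the gitweb link above points to.

    import random

    def randomized_pipeline_schedule(request_urls, max_depth=8):
        # Shuffle a page's subresource requests and group them into pipelines
        # of random depth, so that per-request size and ordering markers are
        # blurred together. Conceptual sketch only -- not the Torbutton code.
        urls = list(request_urls)
        random.shuffle(urls)
        batches = []
        while urls:
            depth = random.randint(2, max_depth)
            batches.append(urls[:depth])
            urls = urls[depth:]
        return batches

    # The same page fetched twice yields different request orderings and batch
    # sizes, so the response stream no longer has a fixed per-request shape.
    page = [f"https://example.com/asset{i}.png" for i in range(20)]
    for batch in randomized_pipeline_schedule(page):
        print(len(batch), batch)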

Our hope is that the randomized pipeline defense will therefore increase the duration of observation required to establish certainty that a site is being visited, by lowering the true positive rate and/or raising the false positive rate beyond what the researchers observed.

We do not expect this defense to be foolproof. We offer it as a prototype, and request that future research papers do not treat the defense as if it were the final solution against website fingerprinting of Tor traffic. In particular, not all websites support pipelining (in fact, an unknown number may deliberately disable it to reduce load), and even those that do will still leak the initial response size as well as the total response size to the attacker. Pipelining may also be disabled by malicious or simply misconfigured exits.

However, the defense could also be improved. We did not attempt to determine the optimal pipeline size or distribution, and are relying on the research community to tweak these parameters as they evaluate the defense.

We could also take more extreme measures, such as building pipelining support into Tor exit nodes. Perhaps better still, we could deploy an HTTP to SPDY translation service at exits. The more efficient request bundling of SPDY would likely obscure request vs response size information yet further. However, as these translations are potentially fragile as well as labor-intensive to implement and deploy, we are unlikely to take these measures without further feedback from and study by the research community.

Alternatively, or perhaps additionally, defenses could be deployed at the obfsproxy plugin layer. Defenses there would not help against an adversary at the bridge or guard node, but would help against regional firewalls.

We would love to hear feedback from the research community about these approaches, and look forward to hearing more results of future attack and defense work along these and other avenues.

Research problem: better guard rotation parameters

Tor's entry guard design protects users in a variety of ways.

First, they protect against the "predecessor attack": if you choose new relays for each circuit, eventually an attacker who runs a few relays will be your first and last hop. With entry guards, the risk of end-to-end correlation for any given circuit is the same, but the cumulative risk for all your circuits over time is capped.

Second, they help to protect against the "denial of service as denial of anonymity" attack, where an attacker who runs quite a few relays fails any circuit that he's a part of and that he can't win against, forcing Alice (the user) to generate more circuits and thus increasing the overall chance that the attacker wins. Entry guards greatly reduce the risk, since Alice will never choose outside of a few nodes for her first hop.

Third, entry guards raise the startup cost to an adversary who runs relays in order to trace users. Without entry guards, the attacker can sign up some relays and immediately start having chances to observe Alice's circuits. With them, new adversarial relays won't have the Guard flag so won't be chosen as the first hop of any circuit; and even once they earn the Guard flag, users who have already chosen guards won't switch away from their current guards for quite a while.

But how long exactly? The first research question here examines vulnerability due to natural churn of entry guards. Consider an adversary who runs one entry guard with advertised capacity C. Alice maintains an ordered list of guards (chosen at random, weighted by advertised speed, from the whole list of possible guards). For each circuit, she chooses her first hop (uniformly at random) from the first G guards from her list that are currently running (appending a new guard to the list as needed, but going back to earlier guards if they reappear). How long until Alice adds the adversary's node to her guard list? You can use the uptime data from the directory archives, either to build a model for guard churn or just to use the churn data directly. Assuming G=3, how do the results vary by C? What's the variance?

Research question two: consider intentional churn due to load balancing too. Alice actually discards each guard between 30 and 60 days (uniformly chosen) after she first picks it. This intentional turnover prevents long-running guards from accumulating more and more users and getting overloaded. Another way of looking at it is that it shifts load to new guards so we can make better use of them. How much does this additional churn factor impact your answer from step one above? Or asked a different way, what fraction of Alice's vulnerability to the adversary's entry guard comes from natural churn, and what fraction from the proactive expiration? How does the answer change for different expiry intervals (e.g. between 10 and 30 days, or between 60 and 90)?
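
One way to start on research questions one and two is with a simple Monte Carlo simulation. The sketch below is deliberately crude and all of its parameters are illustrative placeholders: guard uptime is modeled as an independent coin flip per day instead of being driven by the directory archive data, and guard selection is bandwidth-weighted over a synthetic relay population. Toggling ROTATE separates the contribution of natural churn from that of the proactive 30-60 day rotation, and varying G anticipates question three below.

    import random

    DAYS = 365           # simulate one year
    NUM_GUARDS = 800     # illustrative size of the guard pool
    ADVERSARY = 0        # index of the adversary's single guard
    G = 3                # how many running guards Alice uses (vary for question three)
    ROTATE = True        # proactive 30-60 day rotation (toggle for question two)

    # Placeholder relay population: advertised bandwidths and per-day uptime
    # probabilities drawn at random instead of from the directory archives.
    bandwidth = [random.expovariate(1 / 200.0) for _ in range(NUM_GUARDS)]
    p_up = [random.uniform(0.90, 0.999) for _ in range(NUM_GUARDS)]

    def pick_new_guard(exclude):
        # Bandwidth-weighted choice among guards not already on Alice's list.
        candidates = [i for i in range(NUM_GUARDS) if i not in exclude]
        weights = [bandwidth[i] for i in candidates]
        return random.choices(candidates, weights)[0]

    def days_until_adversary_guard():
        guard_list, expiry = [], {}
        for day in range(DAYS):
            up = {i for i in range(NUM_GUARDS) if random.random() < p_up[i]}
            if ROTATE:  # drop guards past their 30-60 day expiry
                guard_list = [g for g in guard_list if expiry[g] > day]
            # Append new guards until G of the listed guards are currently running.
            while sum(1 for g in guard_list if g in up) < G:
                g = pick_new_guard(set(guard_list))
                guard_list.append(g)
                expiry[g] = day + random.randint(30, 60)
                if g == ADVERSARY:
                    return day
                up.add(g)  # assume a freshly chosen guard was running when chosen
        return None

    runs = [days_until_adversary_guard() for _ in range(500)]
    hits = sorted(d for d in runs if d is not None)
    print(f"clients compromised within a year: {len(hits) / len(runs):.1%}")
    if hits:
        print(f"median day of first compromise among those clients: {hits[len(hits) // 2]}")

A real study would replace the coin-flip uptime model with the observed churn from the directory archives and report how the compromise rate varies with the adversary's advertised capacity C, but even this toy version makes the natural-vs-artificial churn comparison concrete.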

Research question three: how do these answers change as we vary G? Right now we choose from among the first three guards, to reduce the variance of expected user performance -- if we always picked the first guard on the list, and some users picked a low-capacity or highly-congested relay, none of that user's circuits would perform well. That said, if choosing G=1 is a huge win for security, we should work on other ways to reduce variance.

Research question four: how would these answers change if we make the cutoffs for getting the Guard flag more liberal, and/or change how we choose what nodes become guards? After all, Tor's anonymity is based on the diversity of entry and exit points, and while it may be tough to get around exit relay scarcity, my theory is that our artificial entry point scarcity (because our requirements are overly strict) is needlessly hurting the anonymity Tor can offer.

Our current algorithm for guard selection has three requirements:
1) The relay needs to have first appeared longer ago than 12.5% of the relays, or 8 days ago, whichever is shorter.
2) The relay needs to advertise at least the median bandwidth in the network, or 250KB/s, whichever is smaller.
3) The relay needs to have at least the median weighted-fractional-uptime of relays in the network, or 98% WFU, whichever is smaller. (For WFU, the clock starts ticking when we first hear about the relay; we track the percentage of that time the relay has been up, discounting values by 95% every 12 hours.)

Today's guard cutoffs in practice are "was first sighted at least 8 days ago, advertises 100KB/s of bandwidth, and has 98% WFU."

Consider two relays, A and B. Relay A first appeared 30 days ago, disappeared for a week, and has been consistently up since then. It has a WFU (after decay, give or take a fencepost) of 786460 / 824195 weighted seconds = 95.4%, so it's not a guard. Relay B appeared two weeks ago and has been up since then. Its WFU is 658517 / 658517 weighted seconds = 100%, so it gets the Guard flag -- even though it has less uptime, and even less weighted uptime. Based on the answers to the first three research questions, which relay would serve Alice best as a guard?
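
For concreteness, here is a small sketch of the WFU bookkeeping described in requirement 3 above: uptime is accumulated in 12-hour buckets, and each bucket's contribution decays by 5% per 12 hours of age. The two histories below only approximate Relays A and B (the exact position of Relay A's outage and various fenceposts shift the result by a percentage point or so), but they show why a week-long outage inside a month of history lands a relay below the 98% cutoff while a younger relay with a clean record sails past it.

    HALF_DAY = 12 * 60 * 60   # seconds per 12-hour bucket

    def wfu(history):
        # history: list of (seconds, was_up) 12-hour buckets, oldest first.
        # Each bucket's contribution decays by 5% per 12 hours of age.
        weighted_up = weighted_total = 0.0
        for age, (seconds, was_up) in enumerate(reversed(history)):
            weight = 0.95 ** age
            weighted_total += weight * seconds
            weighted_up += weight * seconds * was_up
        return weighted_up / weighted_total

    # Relay A: 30 days of history; a week-long outage shortly after it appeared.
    relay_a = [(HALF_DAY, True)] * 2 + [(HALF_DAY, False)] * 14 + [(HALF_DAY, True)] * 44
    # Relay B: two weeks of history, up the entire time.
    relay_b = [(HALF_DAY, True)] * 28

    print(f"Relay A WFU: {wfu(relay_a):.1%}")   # roughly 94-95%: misses the 98% cutoff
    print(f"Relay B WFU: {wfu(relay_b):.1%}")   # 100%: earns the Guard flag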

The big-picture tradeoff to explore is: what algorithm should we use to assign Guard flags such that a) we assign the flag to as many relays as possible, yet b) we minimize the chance that Alice will use the adversary's node as a guard?

Torbutton 1.4.0 Released

Torbutton 1.4.0 has been released at:
https://www.torproject.org/torbutton/

The addon has been disabled on addons.mozilla.org. Our URL is now
canonical.

This release features support for Firefox 5.0, and has been tested
against the vanilla release for basic functionality. However, it has
not been audited for Network Isolation, State Separation, Tor
Undiscoverability or Interoperability issues[1] due to toggling under
Firefox 5.

If you desire Torbutton functionality with Firefox 4/5, we recommend
you download the Tor Browser Bundle 2.2.x alphas from
https://www.torproject.org/dist/torbrowser/ or run Torbutton in its
own separate Firefox profile.

The reasons for this shift are explained here:
https://blog.torproject.org/blog/toggle-or-not-toggle-end-torbutton

If you find bugs specific to Firefox 5, toggling, and/or extension
conflicts, file them under the component "Torbutton":
https://trac.torproject.org/projects/tor/report/14

Bugs that still apply to Tor Browser should be filed under component
"TorBrowserButton":
https://trac.torproject.org/projects/tor/report/39

Bugs in the "Torbutton" component currently have no maintainer
available to fix them. Feel free to step up.

(No, simply mis-filing your Torbutton toggle bugs under
TorBrowserButton won't cause them to get fixed accidentally. It will
just annoy me slightly as I relocate them to the correct component).

Here is the complete changelog:
* bug 3101: Disable WebGL. Too many unknowns for now.
* bug 3345: Make Google Captcha redirect work again.
* bug 3399: Fix a reversed exception check found by arno.
* bug 3177: Update torbutton to use new TorBrowser prefs.
* bug 2843: Update proxy preferences window to support env var.
* bug 2338: Force toggle at startup if tor is enabled
* bug 3554: Make Cookie protections obey disk settings
* bug 3441: Enable cookie protection UI by default.
* bug 3446: We're Firefox 5.0, we swear.
* bug 3506: Remove window resize event listener.
* bug 1282: Set fixed window size for each new window.
* bug 3508: Apply Stanford SafeCache patch (thanks Edward, Collin et al).
* bug 2361: Make about window work again on FF4+.
* bug 3436: T(A)ILS was renamed to Tails.
* bugfix: Fix a transparent context menu issue on Linux FF4+.
* misc: Squelch exception from app launcher in error console.
* misc: Make DuckDuckGo the default Google Captcha redirect destination.
* misc: Make it harder to accidentally toggle torbutton.

1. https://www.torproject.org/torbutton/en/design/#requirements
