Archive

TorBirdy: 0.1.2 - Our third beta release!

TorBirdy 0.1.2 is out! All users are encouraged to upgrade as soon as possible, especially if you are using Thunderbird 24.

Notable changes in this release include:

0.1.2, 04 Nov 2013

  • New options:
    • restore default TorBirdy settings
    • toggle checking of new messages automatically for all accounts
  • The minimum version of Thunderbird we now support is 10.0 (closes #9569)
  • `--throw-keyids` is now disabled by default (closes #9648)
  • We are no longer forcing Thunderbird updates (closes #8341)
  • Add support for Thunderbird 24 (Gecko 17+) (closes #9673)
  • Enhanced support for Thunderbird chat
  • We have a new TorBirdy logo. Thanks to Nima Fatemi!
  • Improved documentation
  • Add new translations and update existing ones
    • Please see the Transifex page for more information and credits

We offer two ways to install TorBirdy -- either by visiting our website (sig) or by visiting the Mozilla Add-ons page for TorBirdy. Note that there may be a delay -- which can range from a few hours to days -- before the extension is reviewed by Mozilla and updated on the Add-ons page.

As a general anonymity and security note: we are still working on two known anonymity issues with Mozilla. Please make sure that you read the Before Using TorBirdy and Known TorBirdy Issues sections on the wiki before using TorBirdy.

We would love help with translations, programming, or anything else that you think will improve TorBirdy!

New Tor Browser Bundles with Firefox 17.0.10esr

Firefox 17.0.10esr has been released with several security fixes and all of the Tor Browser Bundles have been updated. All users are encouraged to upgrade.

https://www.torproject.org/projects/torbrowser.html.en#downloads

Tor Browser Bundle (2.3.25-14)

  • Update Firefox to 17.0.10esr
    https://www.mozilla.org/security/known-vulnerabilities/firefoxESR.html#f...
  • Update LibPNG to 1.6.6
  • Update NoScript to 2.6.8.4
  • Update HTTPS-Everywhere to 3.4.2
  • Firefox patch changes:
    • Hide infobar for missing plugins. (closes: #9012)
    • Change the default entry page for the addons tab to the installed addons page. (closes: #8364)
    • Make flash objects really be click-to-play if flash is enabled. (closes: #9867)
    • Make getFirstPartyURI log+handle errors internally to simplify caller usage of the API. (closes: #3661)
    • Remove polipo and privoxy from the banned ports list. (closes: #3661)
    • misc: Fix a potential memory leak in the Image Cache isolation
    • misc: Fix a potential crash if OS theme information is ever absent

Tor Browser Bundle (2.4.17-rc-1)

  • Update Firefox to 17.0.10esr
    https://www.mozilla.org/security/known-vulnerabilities/firefoxESR.html#f...
  • Update LibPNG to 1.6.6
  • Update NoScript to 2.6.8.4
  • Downgrade HTTPS-Everywhere to 3.4.2 in preparation for this becoming the stable bundle
  • Firefox patch changes:
    • Hide infobar for missing plugins. (closes: #9012)
    • Change the default entry page for the addons tab to the installed addons page. (closes: #8364)
    • Make flash objects really be click-to-play if flash is enabled. (closes: #9867)
    • Make getFirstPartyURI log+handle errors internally to simplify caller usage of the API. (closes: #3661)
    • Remove polipo and privoxy from the banned ports list. (closes: #3661)
    • misc: Fix a potential memory leak in the Image Cache isolation
    • misc: Fix a potential crash if OS theme information is ever absent

Tor Weekly News — October 30th, 2013

Welcome to the eighteenth issue of Tor Weekly News, the weekly newsletter that covers what is happening in the Tor community.

A few highlights from this year’s Google Summer of Code

The Google Summer of Code 2013 program wrapped up at the end of September. While Nick, Moritz, and Damian attended the GSoC Mentor Summit at Google’s main campus last week, here are a few highlights from three of the five projects that were carried out over the summer.

Robert worked on enhancing Tor’s path selection algorithm. The enhancement uses active measurements of the round-trip time of Tor circuits: rejecting the slowest circuits should improve their average latency. The results of this work will hopefully be integrated into Tor 0.2.5.x, at which point users will be able to benefit from it. Robert wrote: “Working with the Tor community is very encouraging since there are highly skilled and enthusiastic people around. I am really happy to have made that decision and can definitely recommend doing so to others.”

Johannes Fürmann created a censorship simulation tool that facilitates testing of applications in a simulated network which can be configured and extended to behave like censorship infrastructure in various countries. EvilGenius can be used to do automated “smoke testing”, i.e. find out if code still works properly if a node in the network manipulates traffic in different ways. Other than that, it can be used to automatically test decentralized network applications. “Overall, working with Tor was a great experience and I hope to be able to work with the Tor community again” said Johannes.

Kostas Jakeliūnas worked on creating a searchable and scalable Tor Metrics data archive. This required implementing a Tor relay consensus and descriptor search backend that can encompass most of the archival data available; the currently running backend covers relays from 2008 to the present.

Those curious to browse Tor relay archives — searching for a needle in the very large haystack or just looking around — might enjoy playing with the current test platform. It can run powerful queries on the large dataset without query parameter/span restrictions. Many use cases are supported — for example, since the newest consensus data is always available, the backend can be used in an ExoneraTor-like fashion.

Together with Karsten Loesing, Kostas hopes to integrate this system with the current Onionoo, hopefully further empowering (and eventually simplifying) the overall Tor Metrics ecosystem.

Kostas described the project as “an interesting and challenging one — a lot of work […] to make it robust and truly scalable.” He also added: “Working with Tor was a great experience: I found the developer community to be welcoming indeed, comprised of many people who are professionals in their field. It should be noted that where there are interesting problems and a clear cause, great people assemble.”

Collecting data against network level adversaries

“The anonymity of a connection over Tor is vulnerable to an adversary who can observe it in enough places along its route. For example, traffic that crosses the same country as it enters and leaves the Tor network can potentially be deanonymized by an authority in that country who can monitor all network communication.” Karsten Loesing, Anupam Das, and Nikita Borisov began their call for help to Tor relay operators by stating a problem that has recently attracted some interest by the research community.

The question “which part of the Internet does a Tor relay lie in” is easy enough to answer, but “determining routes with high confidence has been difficult” so far. The best source of information could come from the relay operators, as Karsten et al. wrote: “To figure out where traffic travels from your relay, we’d like you to run a bunch of ‘traceroutes’ — network measurements that show the paths traffic takes.”

This one-time experiment — for now — is meant to be used by “several researchers, but the leads are Anupam Das, a Ph.D. student at the University of Illinois at Urbana-Champaign, and his advisor Nikita Borisov.”

In order to participate, shell scripts that automate most of the process are available from a Git repository; they have been carefully reviewed by several members of the Tor community. Since their initial email, Anupam Das has assembled a FAQ regarding scope, resource consumption, and other topics.

Be sure to run the scripts if you can. As Karsten, Anupam, and Nikita concluded, “with your help, we will keep improving to face the new challenges to privacy and freedom online.”

Tor Help Desk Roundup

Using the Tor Browser Bundle is still proving to be tricky for many Ubuntu users who upgraded from Ubuntu 13.04 to 13.10. The commonly reported error is that users cannot enter text in any of the browser’s text fields, including the URL and search bars. So far this problem appears to be resolved by removing ibus with apt-get before running the Tor Browser. Users who need ibus can try running `export GTK_IM_MODULE=xim`, as documented in Trac ticket #9353.

Miscellaneous news

David Goulet is asking for a final round of reviews of his rewrite of torsocks so it can replace the old implementation. Lunar has updated the package in Debian experimental to encourage testing. A few portability bugs and a deadlock have already been ironed out in the process.

The next Tails contributor meeting will be held on November 6th. The current agenda includes “firewall exceptions for user-run local services” and “decide what kind of questions go into the FAQ”, among other topics.

Matthew Finkel has sent a draft proposal with possible solutions for Hidden Services backed by multiple servers. Several comments have been made already, with Nick Mathewson giving a heads-up on the work he has started on merging thoughts and discussions in a new specification.

James B. reported a tutorial on bsdnow.tv describing how to set up Tor relays, bridges, exit nodes, and hidden services on FreeBSD. Last week’s episode of their podcast, “A Brief Intorduction”, features a live demonstration (beginning at 43:52).

The Guardian Project has made a new release of its chat application for Android systems. ChatSecure v12 (previously known as Gibberbot) contains several new features and is fully integrated with Orbot.


This issue of Tor Weekly News has been assembled by Lunar, dope457, Matt Pagan, Kostas Jakeliūnas, ra, Johannes Fürmann, Karsten Loesing, and Roger Dingledine.

Want to continue reading TWN? Please help us create this newsletter. We still need more volunteers to watch the Tor community and report important news. Please see the project page, write down your name and subscribe to the team mailing list if you want to get involved!

Tails 0.21 is out

Tails, The Amnesic Incognito Live System, version 0.21, is out.

All users must upgrade as soon as possible: this release fixes numerous security issues.

Download it now.

Changes

  • Security fixes
    • Don't grant access to the Tor control port for the desktop user. Else, an attacker able to run arbitrary code as this user could obtain the public IP.
    • Don't allow the desktop user to directly change persistence settings. Else, an attacker able to run arbitrary code as this user could leverage this feature to gain persistent root access, as long as persistence is enabled.
    • Install Iceweasel 17.0.10esr with Torbrowser patches.
    • Patch Torbutton to make window resizing closer to what the design says.
  • New features
    • Add a persistence preset for printing settings.
    • Support running Tails off more types of SD cards.
  • Minor improvements
    • Add a KeePassX launcher to the top panel.
    • Improve the bug reporting workflow.
    • Prefer stronger ciphers when encrypting data with GnuPG.
    • Exclude the version string in GnuPG's ASCII armored output.
    • Use the same custom Startpage search URL as the TBB. This apparently disables the new broken "family" filter.
    • Provide a consistent path to the persistent volume mountpoint.
  • Localization
    • Many translation updates all over the place.

See the online Changelog for technical details.

Known issues

  • On some hardware, Vidalia does not start.
  • Longstanding known issues.

What's coming up?

The next Tails release is scheduled for around December 11.

Have a look at our roadmap to see where we are heading.

Do you want to help? There are many ways you can contribute to Tails. If you want to help, come talk to us!

Tor Weekly News — October 23rd, 2013

Welcome to the seventeenth issue of Tor Weekly News, the weekly newsletter that covers what is happening in the Tor community.

Tor’s anonymity and guards parameters

In a lengthy blog post, Roger Dingledine looked back on three research papers published in the past year. Some of them have been covered by the press, and more often than not misunderstood. A good recap of the research problems, what the findings mean, and possible solutions will hopefully help everyone understand the issues better.

Introduced in 2005, entry guards were added to recognise that “some circuits are going to be compromised, but it’s better to increase your probability of having no compromised circuits at the expense of also increasing the proportion of your circuits that will be compromised if any of them are.” Roger “originally picked ‘one or two months’ for guard rotation” but the initial parameters called for more in-depth research.

That call was heard by “the Tor research community, and it’s great that Tor gets such attention. We get this attention because we put so much effort into making it easy for researchers to analyze Tor.” In his post, Roger highlights the findings of three papers: two published at WPES 2012 and Oakland 2013, and another upcoming at CCS 2013.

These research efforts highlighted several issues in the way Tor handles entry guards. Roger details five complementary fixes: using fewer guards, keeping the same guards for longer, better handling of brief unreachability of a guard, making the network bigger, and smarter assignment of the guard flag to relays. Some will require further research to identify the best solution. There are also other aspects to consider, such as systems like Tails which don’t currently persist guards, how pluggable transports could prevent attackers from recognising Tor users, or enhancing measurements from the bandwidth authorities…

The whole blog post is insightful and is a must read for everyone who wishes to better understand some of Tor’s risk mitigation strategies. It is also full of little and big things where you could make a difference!

Hidden Service research

George Kadianakis posted a list of items that need work in the Hidden Service area. Despite not being exhaustive, the list contains many items that might help with upgrading the Hidden Service design, be it around security, performance, guard issues or “petname” systems.

Help and comments are welcome!

Usability issues in existing OTR clients

After the first round of discussions and research into providing a new secure instant-messaging Tor bundle, the consensus is to build it around Mozilla Instantbird. Arlo Breault sent out a draft plan on how to do so.

Instantbird currently lacks a core feature to turn it into the Tor Messenger: support for the OTR protocol for encrypted chat. Now is thus a good time to gather usability issues in existing OTR clients.

Mike Perry kicked off the discussion by pointing out several deficiencies regarding problems with multiple clients, key management issues, and other sub-optimal behaviour.

Ian Goldberg — original author of the pervasive OTR plugin for Pidgin — pointed out that at least one of the behaviours singled out by Mike was “done on purpose. The thing it’s trying to prevent is that Alice and Bob are chatting, and Bob ends OTR just before Alice hits Enter on her message. If Alice’s client went to ‘Not private’ instead of ‘Finished’, Alice’s message would be sent in the clear, which is undesirable. Switching to ‘Finished’ makes Alice have to actively acknowledge that the conversation is no longer secure.”

This tradeoff is a good example of how designing usable and secure user interfaces can be hard. Usability, in itself, is an often overlooked security feature. Now is a good time to contribute your ideas!

Tor Help Desk Roundup

The Tor Help Desk continues to be bombarded with help requests from users behind university proxies who cannot use ORPort bridges or the Pluggable Transports Browser to circumvent their network’s firewall. Although the cases are not all the same, bridges on port 443 or port 80 do not always suffice to circumvent such proxies.

Ubuntu 13.10 (Saucy Salamander) was released this week. One user reported their Tor Browser Bundle behaving unusually after updating their Ubuntu operating system. This issue was resolved by switching to the Tor Browser Bundle 3. Another user asked when Tor APT repositories would have packages for Saucy Salamander. Since then, packages for the latest version of Ubuntu have been made available from the usual deb.torproject.org.

Miscellaneous news

Tails has issued a call for testing of its upcoming 0.21 release. The new version contains two security fixes regarding access to the Tor control port and persistent settings among other improvements and package updates. “Test wildly!” as the Tails team wrote.

Andrew Lewman was invited to speak at SECURE Poland 2013 and sent a report on his trip to Warsaw.

Tails developers are looking for Mac and PC hardware with UEFI. If you have some spare hardware, please consider a donation!

Ximin Luo was the first to create a ticket with five digits on the Tor bug tracker. At the current rate, ticket #20000 should happen by the end of 2015… Or will the project’s continued growth make this happen sooner?

Roger Dingledine reported on his activities for September and October. Arturo Filastò also reported on his September.

Runa Sandvik continues her work on the new, more comprehensible Tor User Manual. The first draft is already out. Please review and contribute.

Aaron published a branch with his work on a Tor exit scanner based on OONI.


This issue of Tor Weekly News has been assembled by Lunar, Matt Pagan, dope457, George Kadianakis, Philipp Winter and velope.

Want to continue reading TWN? Please help us create this newsletter. We still need more volunteers to watch the Tor community and report important news. Please see the project page, write down your name and subscribe to the team mailing list if you want to get involved!

Improving Tor's anonymity by changing guard parameters

There are tensions in the Tor protocol design between the anonymity provided by entry guards and the performance improvements from better load balancing. This blog post walks through the research questions I raised in 2011, then summarizes answers from three recent papers written by researchers in the Tor community, and finishes by explaining what Tor design changes we need to make to provide better anonymity, and what we'll be trading off.

Part one: The research questions

In Tor, each client selects a few relays at random, and chooses only from those relays when making the first hop of each circuit. This entry guard design helps in three ways:

First, entry guards protect against the "predecessor attack": if Alice (the user) instead chose new relays for each circuit, eventually an attacker who runs a few relays would be her first and last hop. With entry guards, the risk of end-to-end correlation for any given circuit is the same, but the cumulative risk for all her circuits over time is capped.

Second, they help to protect against the "denial of service as denial of anonymity" attack, where an attacker who runs quite a few relays fails any circuit that he's a part of and that he can't win against, forcing Alice to generate more circuits and thus increasing the overall chance that the attacker wins. Entry guards greatly reduce the risk, since Alice will never choose outside of a few nodes for her first hop.

Third, entry guards raise the startup cost to an adversary who runs relays in order to trace users. Without entry guards, the attacker can sign up some relays and immediately start having chances to observe Alice's circuits. With them, new adversarial relays won't have the Guard flag so won't be chosen as the first hop of any circuit; and even once they earn the Guard flag, users who have already chosen guards won't switch away from their current guards for quite a while.

In August 2011, I posted these four open research questions around guard rotation parameters:

  1. Natural churn: For an adversary that controls a given number of relays, if the user only replaces her guards when the current ones become unavailable, how long will it take until she's picked an adversary's guard?
  2. Artificial churn: How much more risk does she introduce by intentionally switching to new guards before she has to, to load balance better?
  3. Number of guards: What are the tradeoffs in performance and anonymity from picking three guards vs two or one? By default Tor picks three guards, since if we picked only one then some clients would pick a slow one and be sad forever. On the other hand, picking only one makes users safer.
  4. Better Guard flag assignment: If we give the Guard flag to more or different relays, how much does it change all these answers?

For reference, Tor 0.2.3's entry guard behavior is "choose three guards, adding another one if two of those three go down but going back to the original ones if they come back up, and also throw out (aka rotate) a guard 4-8 weeks after you chose it." I'll discuss in "Part three" of this post what changes we should make to improve this policy.

Part two: Recent research papers

Tariq Elahi, a grad student in Ian Goldberg's group in Waterloo, began to answer the above research questions in his paper Changing of the Guards: A Framework for Understanding and Improving Entry Guard Selection in Tor (published at WPES 2012). His paper used eight months of real-world historical Tor network data (from April 2011 to December 2011) and simulated various guard rotation policies to see which approaches protect users better.

Tariq's paper considered a quite small adversary: he let all the clients pick honest guards, and then added one new small guard to the 800 or so existing guards. The question is then what fraction of clients use this new guard over time. Here's a graph from the paper, showing (assuming all users pick three guards) the vulnerability due to natural churn ("without guard rotation") vs natural churn plus also intentional guard rotation:

Vulnerability from natural vs intentional guard rotation

In this graph their tiny guard node, in the "without guard rotation" scenario, ends up getting used by about 3% of the clients in the first few months, and gets up to 10% by the eight-month mark. The more risky scenario — which Tor uses today — sees the risk shoot up to 14% in the first few months. (Note that the y-axis in the graph only goes up to 16%, mostly because the attacking guard is so small.)

The second paper to raise the issue is from Alex Biryukov, Ivan Pustogarov, and Ralf-Philipp Weinmann in Luxembourg. Their paper Trawling for Tor Hidden Services: Detection, Measurement, Deanonymization (published at Oakland 2013) mostly focuses on other attacks (like how to censor or track popularity of hidden services), but their Section VI.C. talks about the "run a relay and wait until the client picks you as her guard" attack. In this case they run the numbers for a much larger adversary: if they run 13.8% of the Tor network for eight months there's more than a 90% chance of a given hidden service using their guard sometime during that period. That's a huge fraction of the network, but it's also a huge chance of success. And since hidden services in this case are basically the same as Tor clients (they choose guards and build circuits the same way), it's reasonable to conclude that their attack works against normal clients too so long as the clients use Tor often enough during that time.

I should clarify three points here.

First clarifying point: Tariq's paper makes two simplifying assumptions when calling an attack successful if the adversary's relay *ever* gets into the user's guard set. 1) He assumes that the adversary is also either watching the user's destination (e.g. the website she's going to), or he's running enough exit relays that he'll for sure be able to see the corresponding flow out of the Tor network. 2) He assumes that the end-to-end correlation attack (matching up the incoming flow to the outgoing flow) is instantaneous and perfect. Alex's paper argues pretty convincingly that these two assumptions are easier to make in the case of attacking a hidden service (since the adversary can dictate how often the hidden service makes a new circuit, as well as what the traffic pattern looks like), and the paper I describe next addresses the first assumption, but the second one ("how successful is the correlation attack at scale?" or maybe better, "how do the false positives in the correlation attack compare to the false negatives?") remains an open research question.

Researchers generally agree that given a handful of traffic flows, it's easy to match them up. But what about the millions of traffic flows we have now? What levels of false positives (algorithm says "match!" when it's wrong) are acceptable to this attacker? Are there some simple, not too burdensome, tricks we can do to drive up the false positives rates, even if we all agree that those tricks wouldn't work in the "just looking at a handful of flows" case?

More precisely, it's possible that correlation attacks don't scale well because as the number of Tor clients grows, the chance that the exit stream actually came from a different Tor client (not the one you're watching) grows. So the confidence in your match needs to grow along with that or your false positive rate will explode. The people who say that correlation attacks don't scale use phrases like "say your correlation attack is 99.9% accurate" when arguing it. The folks who think it does scale use phrases like "I can easily make my correlation attack arbitrarily accurate." My hope is that the reality is somewhere in between — correlation attacks in the current Tor network can probably be made plenty accurate, but perhaps with some simple design changes we can improve the situation. In any case, I'm not going to try to tackle that research question here, except to point out that 1) it's actually unclear in practice whether you're done with the attack if you get your relay into the user's guard set, or if you are now faced with a challenging flow correlation problem that could produce false positives, and 2) the goal of the entry guard design is to make this issue moot: it sure would be nice to have a design where it's hard for adversaries to get into a position to see both sides, since it would make it irrelevant how good they are at traffic correlation.
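
To see why the base rate matters, here is a toy calculation; it is a sketch with made-up accuracy numbers, not measurements of any real attack. Even a detector that is almost never wrong about any single pair of flows drowns in false matches once it is compared against enough unrelated flows.

# Hypothetical detector accuracy, purely to illustrate the base-rate problem.
true_positive_rate = 0.999   # chance the detector flags the truly matching flow
false_positive_rate = 0.001  # chance it wrongly flags any given unrelated flow

for concurrent_flows in (100, 10000, 1000000):
    expected_false_matches = false_positive_rate * (concurrent_flows - 1)
    precision = true_positive_rate / (true_positive_rate + expected_false_matches)
    print("%8d flows: about %6.1f false matches per true match, precision %.4f"
          % (concurrent_flows, expected_false_matches, precision))

With a million concurrent flows, the same "99.9% accurate" detector produces on the order of a thousand false matches for every real one.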

Second clarifying point: it's about the probabilities, and that's intentional. Some people might be scared by phrases like "there's an x% chance over y months to be able to get an attacker's relay into the user's guard set." After all, they reason, shouldn't Tor provide absolute anonymity rather than probabilistic anonymity? This point is even trickier in the face of centralized anonymity services that promise "100% guaranteed" anonymity, when what they really mean is "we could watch everything you do, and we might sell or give up your data in some cases, and even if we don't there's still just one point on the network where an eavesdropper can learn everything." Tor's path selection strategy distributes trust over multiple relays to avoid this centralization. The trouble here isn't that there's a chance for the adversary to win — the trouble is that our current parameters make that chance bigger than it needs to be.

To make it even clearer: the entry guard design is doing its job here, just not well enough. Specifically, *without* using the entry guard design, an adversary who runs some relays would very quickly find himself as the first hop of one of the user's circuits.

Third clarifying point: we're considering an attacker who wants to learn if the user *ever* goes to a given destination. There are plenty of reasonable other things an attacker might be trying to learn, like building a profile of many or all of the user's destinations, but in this case Tariq's paper counts a successful attack as one that confirms (subject to the above assumptions) that the user visited a given destination once.

And that brings us to the third paper, by Aaron Johnson et al: Users Get Routed: Traffic Correlation on Tor by Realistic Adversaries (upcoming at CCS 2013). This paper ties together two previous series of research papers: the first is "what if the attacker runs a relay?" which is what the above two papers talked about, and the second is "what if the attacker can watch part of the Internet?"

The first part of the paper should sound pretty familiar by now: they simulated running a few entry guards that together make up 10% of the guard capacity in the Tor network, and they showed that (again using historical Tor network data, but this time from October 2012 to March 2013) the chance that the user has made a circuit using the adversary's relays is more than 80% by the six month mark.

In this case their simulation includes the adversary running a fast exit relay too, and the user performs a set of sessions over time. They observe that the user's traffic passes over pretty much all the exit relays (which makes sense since Tor doesn't use an "exit guard" design). Or summarizing at an even higher level, the conclusion is that so long as the user uses Tor enough, this paper confirms the findings in the earlier two papers.

Where it gets interesting is when they explain that "the adversary could run a relay" is not the only risk to worry about. They build on the series of papers started by "Location Diversity in Anonymity Networks" (WPES 2004), "AS-awareness in Tor path selection" (CCS 2009), and most recently "An Empirical Evaluation of Relay Selection in Tor" (NDSS 2013). These papers look at the chance that traffic from a given Tor circuit will traverse a given set of Internet links.

Their point, which like all good ideas is obvious in retrospect, is that rather than running a guard relay and waiting for the user to switch to it, the attacker should instead monitor as many Internet links as he can, and wait for the user to use a guard such that traffic between the user and the guard passes over one of the links the adversary is watching.

This part of the paper raises as many questions as it answers. In particular, all the users they considered are in or near Germany. There are also quite a few Tor relays in Germany. How much of their results here can be explained by peculiarities of Internet connectivity in Germany? Are their results predictive in any way about how users on other continents would fare? Or said another way, how can we learn whether their conclusion shouldn't instead be "German Tor users are screwed, because look how Germany's Internet topology is set up"? Secondly, their scenario has the adversary control the Autonomous System (AS) or Internet Exchange Point (IXP) that maximally deanonymizes the user (they exclude the AS that contains the user and the AS that contains her destinations). This "best possible point to attack" assumption a) doesn't consider how hard it is to compromise that particular part of the Internet, and b) seems like it will often be part of the Internet topology near the user (and thus vary greatly depending on which user you're looking at). And third, like the previous papers, they think of an AS as a single Internet location that the adversary is either monitoring or not monitoring. Some ASes, like large telecoms, are quite big and spread out.

That said, I think it's clear from this paper that there *do* exist realistic scenarios where Tor users are at high risk from an adversary watching the nearby Internet infrastructure and/or parts of the Internet backbone. Changing the guard rotation parameters as I describe in "Part three" below will help in some of these cases but probably won't help in all of them. The canonical example that I've given in talks about "a person in Syria using Tor to visit a website in Syria" remains a very serious worry.

The paper also makes me think about exit traffic patterns, and how to better protect people who use Tor for only a short period of time: many websites pull in resources from all over, especially resources from centralized ad sites. This risk (that it greatly speeds the rate at which an adversary watching a few exit points — or heck, a few ad sites — will be able to observe a given user's exit traffic) provides the most compelling reason I've heard so far to ship Tor Browser Bundle with an ad blocker — or maybe better, with something like Request Policy that doesn't even touch the sites in the first place. On the other hand, Mike Perry still doesn't want to ship an ad blocker in TBB, since he doesn't want to pick a fight with Google and give them even more of a reason to block/drop all Tor traffic. I can see that perspective too.

Part three: How to fix it

Here are five steps we should take, in rough order of how much impact I think each of them would have on the above attacks.

If you like metaphors, think of each time you pick a new guard as a coin flip (heads you get the adversary's guard, tails you're safe this time), and the ideas here aim to reduce both the number and frequency of coin flips.
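
To make the coin-flip picture concrete, here is a small simulation sketch; the 10% adversary, the rotation periods, and the eight-month horizon are illustrative parameters chosen to echo the scenarios above, and natural churn is ignored.

import random

def compromise_probability(adversary_fraction, num_guards, rotation_days,
                           horizon_days, trials=20000):
    # Estimate the chance that at least one guard choice over the time horizon
    # lands on an adversary-controlled relay, with each choice being an
    # independent "coin flip" weighted by the adversary's share of guard capacity.
    rotations = max(1, horizon_days // rotation_days)
    picks = num_guards * rotations
    hits = 0
    for _ in range(trials):
        if any(random.random() < adversary_fraction for _ in range(picks)):
            hits += 1
    return float(hits) / trials

# Many flips: three guards, rotated roughly every 45 days, over eight months.
print(compromise_probability(0.10, num_guards=3, rotation_days=45, horizon_days=240))
# One flip: a single guard that is never rotated over the same period.
print(compromise_probability(0.10, num_guards=1, rotation_days=240, horizon_days=240))

With these made-up numbers the many-flips case comes out near 80% and the single-flip case stays near 10%, which is roughly the shape of the curves discussed under Fix 2 below.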

Fix 1: Tor clients should use fewer guards.

The primary benefit to moving to fewer guards is that there are fewer coin flips every time you pick your guards.

But there's a second benefit as well: right now your choice of guards acts as a kind of fingerprint for you, since very few other users will have picked the same three guards you did. (This fingerprint is only usable by an attacker who can discover your guard list, but in some scenarios that's a realistic attack.) To be more concrete: if the adversary learns that you have a particular three guards, and later sees an anonymous user with exactly the same guards, how likely is it to be you? Moving to two guards helps the math a lot here, since you'll overlap with many more users when everybody is only picking two.
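
A back-of-the-envelope count shows why: with roughly 800 guards to choose from (about the number in Tariq's study), the number of distinct guard sets shrinks dramatically as you pick fewer guards.

from math import factorial

def choose(n, k):
    # number of distinct k-element guard sets that can be drawn from n guards
    return factorial(n) // (factorial(k) * factorial(n - k))

available_guards = 800  # illustrative figure, roughly the guard count in Tariq's data

for k in (1, 2, 3):
    print("picking %d guard(s): %d possible sets" % (k, choose(available_guards, k)))

About 85 million possible triples means a given user's three guards are very likely unique to her; about 320,000 possible pairs means far more users share any given pair, so the fingerprint is much weaker.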

On the other hand, the main downside is increased variation in performance. Here's Figure 10 from Tariq's paper:

Average guard capacity for various numbers of guards

"Farther to the right" is better in this graph. When you pick three guards (the red line), the average speed of your guards is pretty good (and pretty predictable), since most guards are pretty fast and it's unlikely you'll pick slow ones for all three. However, when you only pick only one guard (the purple line), the odds go up a lot that you get unlucky and pick a slow one. In more concrete numbers, half of the Tor users will see up to 60% worse performance.

The fix of course is to raise the bar for becoming a guard, so every possible guard will be acceptably fast. But then we have fewer guards total, increasing the vulnerability from other attacks! Finding the right balance (as many guards as possible, but all of them fast) is going to be an ongoing challenge. See Brainstorm tradeoffs from moving to 2 (or even 1) guards (ticket 9273) for more discussion.

Switching to just one guard will also preclude deploying Conflux, a recent proposal to improve Tor performance by routing traffic over multiple paths in parallel. The Conflux design is appealing because it not only lets us make better use of lower-bandwidth relays (which we'll need to do if we want to greatly grow the size of the Tor network), but it also lets us dynamically adapt to congestion by shifting traffic to less congested routes. Maybe some sort of "guard family" idea can work, where a single coin flip chooses a pair of guards and then we split our traffic over them. But if we want to avoid doubling the exposure to a network-level adversary, we might want to make sure that these two guards are near each other on the network — I think the analysis of the network-level adversary in Aaron's paper is the strongest argument for restricting the variety of Internet paths that traffic takes between the Tor client and the Tor network.

This discussion about reducing the number of guards also relates to bridges: right now if you configure ten bridges, you round-robin over all of them. It seems wise for us to instead use only the first bridge in our bridge list, to cut down on the set of Internet-level adversaries that get to see the traffic flows going into the Tor network.

Fix 2: Tor clients should keep their guards for longer.

In addition to choosing fewer guards, we should also avoid switching guards so often. I originally picked "one or two months" for guard rotation since it seemed like a very long time. In Tor 0.2.4, we've changed it to "two or three months". But I think changing the guard rotation period to a year or more is probably much wiser, since it will slow down the curves on all the graphs in the above research papers.

I asked Aaron to make a graph comparing the success of an attacker who runs 10% of the guard capacity, in the "choose 3 guards and rotate them every 1-2 months" case and the "choose 1 guard and never rotate" case:

Attacker success over time: three guards rotated every 1-2 months vs one guard never rotated

In the "3 guard" case (the blue line), the attacker's success rate rapidly grows to about 25%, and then it steadily grows to over 80% by the six month mark. The "1 guard" case (green line), on the other hand, grows to 10% (which makes sense since the adversary runs 10% of the guards), but then it levels off and grows only slowly as a function of network churn. By the six month mark, even this very large adversary's success rate is still under 25%.

So the good news is that by choosing better guard rotation parameters, we can almost entirely resolve the vulnerabilities described in these three papers. Great!

Or to phrase it more as a research question, once we get rid of this known issue, I'm curious how the new graphs over time will look, especially when we have a more sophisticated analysis of the "network observer" adversary. I bet there are some neat other attacks that we'll need to explore and resolve, but that are being masked by the poor guard parameter issue.

However, fixing the guard rotation period issue is alas not as simple as we might hope. The fundamental problem has to do with "load balancing": allocating traffic onto the Tor network so each relay is used the right amount. If Tor clients choose a guard and stick with it for a year or more, then old guards (relays that have been around and stable for a long time) will see a lot of use, and new guards will see very little use.

I wrote a separate blog post to provide background for this issue: "The lifecycle of a new relay". Imagine if the ramp-up period in the graph from that blog post were a year long! People would set up fast relays, they would get the Guard flag, and suddenly they'd see little to no traffic for months. We'd be throwing away easily half of the capacity volunteered by relays.

One approach to resolving the conflict would be for the directory authorities to track how much of the past n months each relay has had the Guard flag, and publish a fraction in the networkstatus consensus. Then we'd teach clients to rebalance their path selection choices so a relay that's been a Guard for only half of the past year only counts 50% as a guard in terms of using that relay in other positions in circuits. See Load balance right when we have higher guard rotation periods (ticket 9321) for more discussion, and see Raise our guard rotation period (ticket 8240) for earlier discussions.
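
A minimal sketch of the client-side rebalancing idea; the guard-fraction value and the linear weighting rule here are hypothetical, just to illustrate what the tickets above propose.

def position_weights(consensus_weight, guard_fraction):
    # Split a relay's weight between guard and non-guard positions according to
    # the (hypothetical) fraction of the past year it has held the Guard flag,
    # as the directory authorities might publish it in the consensus.
    guard_weight = consensus_weight * guard_fraction
    other_weight = consensus_weight * (1.0 - guard_fraction)
    return guard_weight, other_weight

# A relay that has been a Guard for half of the past year counts 50% as a guard
# and contributes the other 50% toward middle and exit selection instead.
print(position_weights(consensus_weight=10000, guard_fraction=0.5))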

Yet another challenge here is that sticking to the same guard for a year gives plenty of time for an attacker to identify the guard and attack it somehow. It's particularly easy to identify the guard(s) for hidden services currently (since as mentioned above, the adversary can control the rate at which hidden services make new circuits, simply by visiting the hidden service), but similar attacks can probably be made to work against normal Tor clients — see e.g. the http-level refresh tricks in How Much Anonymity does Network Latency Leak? This attack would effectively turn Tor into a network of one-hop proxies, to an attacker who can efficiently enumerate guards. That's not a complete attack, but it sure does make me nervous.

One possible direction for a fix is to a) isolate streams by browser tab, so all the requests from a given browser tab go to the same circuit, but different browser tabs get different circuits, and then b) stick to the same three-hop circuit (i.e. same guard, middle, and exit) for the lifetime of that session (browser tab). How to slow down guard enumeration attacks is a tough and complex topic, and it's too broad for this blog post, but I raise the issue here as a reminder of how interconnected anonymity attacks and defenses are. See Slow Guard Discovery of Hidden Services and Clients (ticket 9001) for more discussion.

Fix 3: The Tor code should better handle edge cases where you can't reach your guard briefly.

If a temporary network hiccup makes your guard unreachable, you switch to another one. But how long is it until you switch back? If the adversary's goal is to learn whether you ever go to a target website, then even a brief switch to a guard that the adversary can control or observe could be enough to mess up your anonymity.

Tor clients fetch a new networkstatus consensus every 2-4 hours, and they are willing to retry non-running guards if the new consensus says they're up again.

But I think there are a series of little bugs and edge cases where the Tor client abandons a guard more quickly than it should. For example, we mark a guard as failed if any of our circuit requests time out before finishing the handshake with the first hop. We should audit both the design and the source code with an eye towards identifying and resolving these issues.

We should also consider whether an adversary can *induce* congestion or resource exhaustion to cause a target user to switch away from her guard. Such an attack could work very nicely coupled with the guard enumeration attacks discussed above.

Most of these problems exist because in the early days we emphasized reachability ("make sure Tor works") over anonymity ("be very sure that your guard is gone before you try another one"). How should we handle this tradeoff between availability and anonymity: should you simply stop working if you've switched guards too many times recently? I imagine different users would choose different answers to that tradeoff, depending on their priorities. It sounds like we should make it easier for users to select "preserve my anonymity even if it means lower availability". But at the same time, we should remember the lessons from Anonymity Loves Company: Usability and the Network Effect about how letting users choose different settings can make them more distinguishable.

Fix 4: We need to make the network bigger.

We've been working hard in recent years to get more relay capacity. The result is a more than four-fold increase in network capacity since 2011:

Growth in total relay capacity of the Tor network since 2011

As the network grows, an attacker with a given set of resources will have less success at the attacks described in this blog post. To put some numbers on it, while the relay adversary in Aaron's paper (who carries 660mbit/s of Tor traffic) represented 10% of the guard capacity in October 2012, that very same attacker would have been 20% of the guard capacity in October 2011. Today that attacker is about 5% of the guard capacity. Growing the size of the network translates directly into better defense against these attacks.

However, the analysis is more complex when it comes to a network adversary. Just adding more relays (and more relay capacity) doesn't always help. For example, adding more relay capacity in a part of the network that the adversary is already observing can actually *decrease* anonymity, because it increases the fraction the adversary can watch. We discussed many of these issues in the thread about turning funding into more exit relays. For more details about the relay distribution in the current Tor network, check out Compass, our tool to explore what fraction of relay capacity is run in each country or AS. Also check out Lunar's relay bubble graphs.

Yet another open research question in the field of anonymous communications is how the success rate of a network adversary changes as the Tor network changes. If we were to plot the success rate of the *relay* adversary using historical Tor network data over time, it's pretty clear that the success rate would be going down over time as the network grows. But what's the trend for the success rate of the network adversary over the past few years? Nobody knows. It could be going up or down. And even if it is going down, it could be going down quickly or slowly.

(Read more in Research problem: measuring the safety of the Tor network where I describe some of these issues in more detail.)

Recent papers have gone through enormous effort to get one, very approximate, snapshot of the Internet's topology. Doing that effort retroactively and over long and dynamic time periods seems even more difficult and more likely to introduce errors.

It may be that the realities of Internet topology centralization make it so that there are fundamental limits on how much safety Tor users can have in a given network location. On the other hand, researchers like Aaron Johnson are optimistic that "network topology aware" path selection can improve Tor's protection against this style of attack. Much work remains.

Fix 5: We should assign the guard flag more intelligently.

In point 1 above I talked about why we need to raise the bar for becoming a guard, so all guards can provide adequate bandwidth. On the other hand, having fewer guards is directly at odds with point 4 above.

My original guard rotation parameters blog post ends with this question: what algorithm should we use to assign Guard flags such that a) we assign the flag to as many relays as possible, yet b) we minimize the chance that Alice will use the adversary's node as a guard?

We should use historical Tor network data to pick good values for the parameters that decide which relays become guards. This remains a great thesis topic if somebody wants to pick it up.

Part four: Other thoughts

What does all of this discussion mean for the rest of Tor? I'll close by trying to tie this blog post to the broader Tor world.

First, all three of these papers come from the Tor research community, and it's great that Tor gets such attention. We get this attention because we put so much effort into making it easy for researchers to analyze Tor: we've worked closely with these authors to help them understand Tor and focus on the most pressing research problems.

In addition, don't be fooled into thinking that these attacks only apply to Tor: using Tor is still better than using any other tool, at least in quite a few of these scenarios. That said, some other attacks in the research literature might be even easier than the attacks discussed here. These are fast-moving times for anonymity research. "Maybe you shouldn't use the Internet then" is still the best advice for some people.

Second, the Tails live CD doesn't use persistent guards. That's really bad I think, assuming the Tails users have persistent behavior (which basically all users do). See their ticket 5462.

Third, the network-level adversaries rely on being able to recognize Tor flows. Does that argue that using pluggable transports, with bridges, might change the equation if it stops the attacker from recognizing Tor users?

Fourth, I should clarify that I don't think any of these large relay-level adversaries actually exist, except as a succession of researchers showing that it can be done. (GCHQ apparently ran a small number of relays a while ago, but not in a volume or duration that would have enabled this attack.) Whereas I *do* think that the network-level attackers exist, since they already invested in being able to surveil the Internet for other reasons. So I think it's great that Aaron's paper presents the dual risks of relay adversaries and link adversaries, since most of the time when people are worrying about one of them they're forgetting the other one.

Fifth, there are still some ways to game the bandwidth authority measurements (here's the spec) into giving you more than your fair share of traffic. Ideally we'd adapt a design like EigenSpeed so it can measure fast relays both robustly and accurately. This question also remains a great thesis topic.

And finally, as everybody wants to know: was this attack how "they" busted recent hidden services (Freedom Hosting, Silk Road, the attacks described in the latest Guardian article)? The answer is apparently no in each case, which means the techniques they *did* use were even *lower* hanging fruit. The lesson? Security is hard, and you have to get it right at many different levels.

Tor Weekly News — October 16th, 2013

Welcome to the sixteenth issue of Tor Weekly News, the weekly newsletter that covers what’s happening in the venerable Tor community.

Making hidden services more scalable and harder to locate

Christopher Baines started a discussion on tor-dev on the scaling issues affecting the current design of Tor hidden services. Nick Mathewson later described Christopher’s initial proposal as “single hidden service descriptor, multiple service instances per intro point” along with three other alternatives. Nick and Christopher also teamed up to produce a set of seven goals that a new hidden service design should aim for regarding scalability.

There’s probably more to discuss regarding which goals are the most desirable, and which designs are likely to address them, without — as always — harming anonymity.

George Kadianakis also called for help concerning “the guard enumeration attack that was described by the Trawling for Tor Hidden Services: Detection, Measurement, Deanonymization paper (in section VII).”

The most popular solution so far seems to be enabling a client or hidden service to reuse some parts of a circuit that cannot be completed successfully in order to connect to new nodes. This should “considerably slow the attack”, but “might result in unexpected attacks” as George puts it.

These problems could benefit from everyone’s attention. Feel free to read the threads in full and offer your insights!

Detecting malicious exit nodes

Philipp Winter asked for feedback on the technical architecture of “a Python-based exit relay scanner which should detect malicious and misbehaving exits.”

Aaron took the opportunity to mention his plans to leverage the work done as part of OONI to detect network interference. Aaron’s intention is to “provide Tor network tests as part of ooni-probe’s standard set of tests, so that many individuals will measure the Tor network and automatically publish their results, and so that current and future network interference tests can be easily adapted to running on the Tor network.”

Detecting misbehaving exits so they can be flagged “BadExit” by the operators of the directory authorities is important to make every Tor user safer. Getting more users to run tests against our many exit nodes would benefit everyone — it makes it more likely that bad behavior will be caught as soon as possible.

Hiding location at the hardware level

One of Tor’s goals is to hide the location of its users, and it does a reasonably good job of this for Internet connections. But when your threat model includes an adversary that can monitor which equipment is connected to a local network, or monitor Wi-Fi network probes received by many access points, extra precautions must be taken.

Ethernet and Wi-Fi cards ship with a factory-determined hardware address (also called a MAC address) that can uniquely identify a computer across networks. Thankfully, most devices allow their hardware address to be changed by an operating system.
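
As a concrete illustration of what randomizing such an address involves, here is a sketch of generating a replacement MAC in software; how Tails would actually apply it to a network interface (for example via macchanger) is a separate implementation question.

import random

def random_mac():
    # Generate a random MAC address with the locally-administered bit set and the
    # multicast bit cleared, so it looks like a valid unicast address that was
    # deliberately assigned by software rather than burned in at the factory.
    first_octet = (random.randint(0, 255) & 0b11111100) | 0b00000010
    octets = [first_octet] + [random.randint(0, 255) for _ in range(5)]
    return ":".join("%02x" % octet for octet in octets)

print(random_mac())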

As the Tails live operating system aims to protect the privacy and anonymity of its users, it has long been suggested that it should automatically randomize MAC addresses. Some important progress has been made, and this week anonym requested comments on a detailed analysis of why, when and how Tails should randomize MAC addresses.

In this analysis, anonym describes a Tails user wanting to hide their geographical movement and not be identified as using Tails, but who also wants to “avoid alarming the local administrators (imagine a situation where security guards are sent to investigate an ‘alien computer’ at your workplace, or similar)” and “avoid network connection problems due to MAC address white-listing, hardware or driver issues, or similar”.

The analysis then tries to understand when MAC address should be randomized depending on several combinations of locations and devices. The outcome is that “this feature is enabled by default, with the possibility to opt-out.” anonym then delves into user interface and implementation considerations.

If you are interested in the analysis, or curious about how you could help with the proposed implementation, be sure to have a look!

Tor Help Desk Roundup

The Tor Project wishes to expand its support channels to text-based instant messaging as part of the Boisterous Otter project. Lunar and Colin C. came up with a possible implementation based on the XMPP protocol, Prosody for the server side, and Prodromus as the basis for the web-based interface.

This week, multiple people asked if Tor worked well on the Raspberry Pi. Although the Tor Project does not have any documentation directed specifically at the Raspberry Pi (yet!), the issue was raised on Tor’s StackExchange page. Tor relay operators are encouraged to share their experiences or ask for help on the tor-relays public mailing list.

Miscellaneous news

Ten years ago, on October 8th, 2003, Roger Dingledine announced the first release of tor as free software on the or-dev mailing list.

Damian Johnson announced the release of Stem 1.1.0. The new version of this “Python library for interacting with Tor” adds remote descriptor fetching, connection resolution and a myriad of small improvements and fixes.

Arlo Breault sent out a detailed plan on how Mozilla Instantbird could be turned into the Tor Messenger. Feedback would be welcome, especially with regard to sandboxing, auditing, and internationalization for right-to-left languages.

The Spanish and German versions of the Tails website are outdated and may soon be disabled. Now is a good time to help if you want to keep those translations running!

adrelanos announced the release of Whonix 7, an operating system “based on Tor […], Debian GNU/Linux and the principle of security by isolation.” The new version includes tor 0.2.4, Tor Browser as the system default browser and a connection wizard, among other changes.

Lunar sent out a report about the Dutch OHM2013 hacker camp that took place in August.

Philipp Winter, Justin Bull and rndm made several improvements to Atlas, the web application for learning more about currently-running Tor relays. The HSDir flag is now properly displayed (#9911), full country names and flags are shown instead of two-letter codes (#9914) and it’s now easier to find out how long a relay has been down (#9814).

ra, one of Tor’s former GSoC students, proposed a patch to add a command to the Tor control protocol asking tor to pick a completely new set of guards.

starlight shared some simple shell snippets that connect to the Tor control port in order to limit the bandwidth used by a relay while running backups.
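
For comparison, the same trick can be expressed with Stem instead of raw shell; a sketch, assuming the control port is reachable on 9051 with cookie authentication, and with an rsync call standing in for the real backup job.

import subprocess
from stem.control import Controller

# Throttle the relay while the backup runs, then let the rate from torrc apply again.
with Controller.from_port(port=9051) as controller:
    controller.authenticate()
    controller.set_conf('BandwidthRate', '100 KBytes')
    try:
        subprocess.call(['rsync', '-a', '/var/lib/tor/', '/backup/tor/'])
    finally:
        controller.reset_conf('BandwidthRate')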


This issue of Tor Weekly News has been assembled by Lunar, Matt Pagan, harmony, dope457, Damian Johnson and Philipp Winter.

Want to continue reading TWN? Please help us create this newsletter. We still need more volunteers to watch the Tor community and report important news. Please see the project page, write down your name and subscribe to the team mailing list if you want to get involved!

Stem Release 1.1


Hi all. After seven months of work I'm pleased to announce Stem's 1.1.0 release!

For those who aren't familiar with it, Stem is a Python library for interacting with Tor. With it you can script against your relay, descriptor data, or even write applications similar to arm and Vidalia.

https://stem.torproject.org/

So what's new in this release?


Remote Descriptor Fetching

The stem.descriptor.remote module allows you to download current Tor descriptor data from directory authorities and mirrors, much like Tor itself does. With this you can easily check the status of the network without piggybacking on a local instance of Tor (or even having it installed).

For example...

from stem.descriptor.remote import DescriptorDownloader

downloader = DescriptorDownloader()

try:
  # fetch the current consensus from the directory mirrors and print
  # an entry for each relay it lists
  for desc in downloader.get_consensus().run():
    print "found relay %s (%s)" % (desc.nickname, desc.fingerprint)
except Exception as exc:
  print "Unable to retrieve the consensus: %s" % exc



Connection Resolution

One of arm's most popular features has been its ability to monitor Tor's connections, and stem can now do the same! Lookups are performed via seven *nix and FreeBSD resolvers; for more information and an example, see our tutorials.
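
Here is a small usage sketch along the lines of the tutorials; it assumes a tor process is running locally and that at least one system resolver is available to the current user.

from stem.util.connection import get_connections, system_resolvers

# Pick whichever connection resolver (netstat, ss, lsof, ...) this platform
# provides, then list the tor process's current connections.
resolvers = system_resolvers()

if not resolvers:
    print("No connection resolvers are available on this platform")
else:
    for conn in get_connections(resolvers[0], process_name='tor'):
        print("%s:%s -> %s:%s" % (conn.local_address, conn.local_port,
                                  conn.remote_address, conn.remote_port))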


Numerous Features and Fixes

For a rundown of other changes see...

https://stem.torproject.org/change_log.html#version-1-1

Cheers! -Damian
