Tor Browser 3.5.2 is released

The 3.5.2 release of the Tor Browser Bundle is now available on the Download page. You can also download the bundles directly from the distribution directory.

This release includes important security updates to Firefox.

Please see the TBB FAQ listing for any issues you may have before contacting support or filing tickets. In particular, the TBB 3.x section lists common issues specific to the Tor Browser 3.x series.

Here is the list of changes since 3.5.1. The 3.x ChangeLog is also available.

  • Rebase Tor Browser to Firefox 24.3.0ESR
  • Bug 10419: Block content window connections to localhost
  • Update Torbutton to
    • Bug 10800: Prevent findbox exception and popup in New Identity
    • Bug 10640: Fix about:tor's update pointer position for RTL languages.
    • Bug 10095: Fix some cases where resolution is not a multiple of 200x100
    • Bug 10374: Clear site permissions on New Identity
    • Bug 9738: Fix for auto-maximizing on browser start
    • Bug 10682: Workaround to really disable updates for Torbutton
    • Bug 10419: Don't allow connections to localhost if Torbutton is toggled
    • Bug 10140: Move Japanese to extra locales (not part of TBB dist)
    • Bug 10687: Add Basque (eu) to extra locales (not part of TBB dist)
  • Update Tor Launcher to
    • Bug 10682: Workaround to really disable updates for Tor Launcher
  • Update NoScript to

Tails 0.22.1 is out

Tails, The Amnesic Incognito Live System, version 0.22.1, is out.

All users must upgrade as soon as possible: this release fixes numerous security issues.


Notable user-visible changes include:

  • Security fixes
    • Upgrade the web browser to 24.3.0esr, which fixes a few serious security issues.
    • Upgrade the system NSS to 3.14.5, which fixes a few serious security issues.
    • Work around a browser size fingerprinting issue by using small icons in the web browser's navigation toolbar.
    • Upgrade Pidgin to 2.10.8, which fixes a number of serious security issues.
  • Major improvements
    • Check for available upgrades using Tails Upgrader, and propose applying an incremental upgrade whenever possible.
    • Install Linux 3.12 (3.12.6-2).
  • Bugfixes
    • Fix the keybindings problem introduced in 0.22.
    • Fix the Unsafe Browser problem introduced in 0.22.
    • Use IE's icon in Windows camouflage mode.
    • Handle some corner cases better in Tails Installer.
    • Use the correct browser homepage in Spanish locales.
  • Minor improvements
    • Update Torbutton to
    • Do not start Tor Browser automatically, but notify when Tor is ready.
    • Import latest Tor Browser prefs.
    • Many user interface improvements in Tails Upgrader.

See the online Changelog for technical details.

Known issues

I want to try it or to upgrade!

Go to the download page.

As no software is ever perfect, we maintain a list of problems that affect the latest release of Tails.

What's coming up?

The next Tails release is scheduled for March 18.

Have a look at our roadmap to see where we are heading.

There are many ways you can contribute to Tails. If you want to help, come talk to us!

Support and feedback

For support and feedback, visit the Support section on the Tails website.

Tor Weekly News — February 4th, 2014

Welcome to the fifth issue of Tor Weekly News in 2014, the weekly newsletter that covers what is happening in the Tor community.

News from the browser team front

Mike Perry has a detailed report about what the growing Tor Browser team has been up to. Among the good news, new fingerprinting defenses are getting close to being merged for “screen resolution, default character sets, site permissions, and local service enumeration”. Some other changes that will reduce the attack surface include “disabling addon update requests for addons that should not update, a potential fix for a disk leak in the browser’s video cache, […], and a potential fix to prevent the Flash plugin from being loaded into the browser at all until the user actually requests to use it.”

Most censored users currently have to use a separate browser bundle dubbed “pluggable transports bundle”. This has proven quite inconvenient for both users and those trying to support them. Mike reports progress on “unifying the pluggable transport bundles with the official bundles, so that both censored and uncensored users can use the same bundles. […] The progress is sufficient that we are very likely to be able to deploy a 3.6-beta1 release in February to test these unified bundles.”

Another important topic is how the privacy fixes in the Tor Browser can benefit a wider userbase. The team has “continued the merge process with Mozilla, and have worked to ensure that every patch of ours is on their radar […]. Two patches, one for an API we require to manage the Tor subprocess, and another to give us a filter to remove potentially dangerous drag-and-drop events to the desktop have already been merged. Next steps will include filing more bugs, continual contact with their development team, and touching up patches as needed.”

There are even more things to smile about in the report. Read it in full for the whole picture.

Key revocation in next generation hidden services

It looks like every public-key infrastructure struggles with how to handle key revocation. Hidden services are no different: the current design does nothing to prevent a stolen key from being reused by an attacker.

With the ongoing effort to create a new protocol for hidden services, now seems a good time for George Kadianakis to raise this issue. In the past, hidden service operators had little control over their secret keys. The new design enables offline management operations, including key revocation.

As George puts it, currently well-known solutions “are always messy and don’t work really well (look at SSL’s OCSP and CRLs).” So how can “the legitimate Hidden Service inform a client that its keys got compromised”?

In his email, George describes two solutions, one relying on the directory authorities, the other on hidden service directories. Both have drawbacks, so perhaps further research is necessary.

In the same thread, Nick Hopper suggested a scheme that uses multiple hidden service directories to cross-certify their revocation lists. This gives more confidence to the user, since the adversary now has to compromise multiple hidden service directories.

Please join the discussion if you have ideas to share!

Help needed to remove DNS leaks from Mumble

Mumble is a “low-latency, high quality voice chat software primarily intended for use while gaming”.

It’s proven to be a reliable solution for voice chat among multiple parties over Tor. Matt and Colin have worked on documentation on how to set up both the client and the server for Tor users.
But the client is currently safely usable only on Linux systems with torsocks and on Tails. On other operating systems, the Mumble client will unfortunately leak the address of the server to the local DNS resolver.

The changes that need to be made to the Mumble code are less trivial than one might think. Matt describes the issue in more detail in his call for help. Have a look if you are up for some C++/Qt hacking.

Monthly status reports for January 2014

The wave of regular monthly reports from Tor project members for the month of January has begun. Damian Johnson released his report first, followed by reports from Philipp Winter, Sherief Alaa, the Tor Browser team from Mike Perry, Colin C., the help desk, Matt, Lunar, George Kadianakis, and Pearl Crescent.

Miscellaneous news

Nick Mathewson came up with a Python script to convert the new MaxMind GeoIP2 binary database to the format used by Tor for its geolocation database.

Thanks to John Ricketts from Quintex Alliance Consulting for providing another mirror for the Tor Project’s website and software.

Abhiram Chintangal and Oliver Baumann are reporting progress on their rewrite of the Tor Weather service.

Andreas Jonsson gave an update on how Mozilla is moving to a multi-process model for Firefox and how this should positively affect the possibility of sandboxing the Tor Browser in the future.

As planned, to help “developers to analyze the directory protocol and for researchers to understand what information is available to clients to make path selection decisions”, Karsten Loesing has made microdescriptor archives available on the metrics website.

Christian has deployed a test platform for the JavaScript-less version of Globe, a tool to retrieve information about the Tor network and its relays.

In an answer to Shadowman’s questions about pluggable transports, George Kadianakis wrote a detailed reply on how Tor manages pluggable transports, both on the server side and on the client side.

Arthur D. Edelstein has advertised a GreaseMonkey script to enable Tor Browser to access YouTube videos without having JavaScript enabled. Please be aware of the security risks that GreaseMonkey might introduce before using such a solution.

Andrew Lewman reports on his trip to Washington DC where he met Spitfire Strategies to learn about “Tor’s brand, media presence, and ideas for the future”. For a short excerpt: “It’s interesting to get critiques on all our past media appearances; what was good and what could be better. Overall, the team there are doing a great job.”

Lunar accounted for Tor’s presence at FOSDEM, one of the largest free software events in Europe. The project had a small booth shared with Mozilla, and there was even a relay operator meetup.

Yan Zhu has released the first version of HTTPS Everywhere for Firefox Mobile. Good news for users of the upcoming Orfox.

Tor help desk roundup

Users often want to know if Tor can make them appear to be coming from a particular country. Although doing so can reduce one’s anonymity, it is documented on our FAQ page.

Orbot users have noticed that installing Orbot to their SD storage can cause Orbot to stop functioning correctly. Installing Orbot to the internal storage has resolved issues for a few users.

News from Tor StackExchange

Rhin is looking for hidden service hosting providers. Jens gave them a pointer, but it looks like there are no gratis hidden service hosters currently available.

Vijay kudal wanted to know how to change the current circuit within shell scripts. Jens Kubieziel gave an answer using expect and hexdump.

Roya saw Atlas reporting contradictory information about the exit node being used. It seems to be a bug in check occurring when multiple nodes use the same IP address.

This issue of Tor Weekly News has been assembled by Lunar, Matt Pagan, qbi, George Kadianakis, Colin, Sandeep, Paul Feitzinger and Karsten Loesing.

TWN is a community newsletter. It can’t rest upon a single pair of shoulders at all times, especially when those shoulders stand behind a booth for two days straight. So if you want to continue reading TWN, we really need your help! Please see the project page and say “hi” on the team mailing list.

Tor Weekly News — January 29th, 2014

Welcome to the fourth issue of Tor Weekly News in 2014, the weekly newsletter that covers what is happening in the Tor community.

Tor Browser Bundle 3.5.1 is released

An update to the Tor Browser Bundle has been released on January 27th. The new release contains Tor which fixes a bug creating useless extra circuits. It also fixes a denial of service condition in OpenSSL and removes “” from the NoScript whitelist.

Arabic bundles are back after a short hiatus. Support for screen readers is also enabled again and has been confirmed working.

HTTPS Everywhere has been updated to version 3.4.5. It contains a new rule to secure connections to Stack Exchange and its Tor corner.

Look at the blog post for a more detailed changelog. And now, head over to the download page and upgrade!

New Tor denial of service attacks and defenses

Rob Jansen, Florian Tschorsch, Aaron Johnson, and Björn Scheuermann have been working on a new paper entitled The Sniper Attack: Anonymously Deanonymizing and Disabling the Tor Network. As research papers are sometimes hard to fully understand, Rob Jansen has published a new blog post giving an overview of the attacks, the defenses, what has been modified in Tor so far, and what open questions remain.

“We found a new vulnerability in the design of Tor’s flow control algorithm that can be exploited to remotely crash Tor relays. The attack is an extremely low resource attack in which an adversary’s bandwidth may be traded for a target relay’s memory (RAM) at an amplification rate of one to two orders of magnitude” explains Rob.

The authors have been working with Tor developers on integrating defenses before publishing: “Due to our devastating findings, we also designed three defenses that mitigate our attacks, one of which provably renders the attack ineffective. Defenses have been implemented and deployed into the Tor software to ensure that the Tor network is no longer vulnerable as of Tor version and later.”

Be sure to read the blog post and the paper in full if you want to know more.

Good times at Real World Crypto 2014

On the second week of January, a bunch of Tor developers attended the Real World Crypto workshop in New York City.

The workshop featured a nice blend of industry and academic crypto talks and a fruitful hallway track. Many researchers involved with Tor and privacy technologies were also present.

As far as talks were concerned, Tom Shrimpton presented the Format-Transforming Encryption (FTE) traffic obfuscation tool which is currently being developed to work as a Tor pluggable transport. The Tor developers present also worked with Kevin Dyer, one of the paper authors and developers of FTE, towards including FTE in the Pluggable Transport Tor bundles.

On the censorship circumvention front, I2P developers showed interest in using pluggable transports. Work has been done to identify various problems with the current PT spec that need to be fixed so that other projects can use pluggable transports more smoothly.

Furthermore, Tor developers talked with the developers of UProxy (a censorship circumvention tool made by Google) and helped them understand how pluggable transports work and what they would need to do to use them in UProxy. They seemed interested and motivated to work on this.

The Tor developers also worked on the Next Generation Hidden Services project, and sketched out some ways to move forward even though there are some open research questions with the current design.

Nick Mathewson commented on IRC: “I think the hallway track to main conference utility ratio was higher than usual, since the conference actually sticks practitioners and cryptographers in the same room pretty reliably.” Let’s hope for next year!

The media and some terminology

BusinessWeek published The inside story of Tor, the best Internet anonymity tool the government ever built. Though better than what one can usually read about Tor in the press, the piece, courtesy of Dune Lawrence, still sparked a discussion on the tor-talk mailing list about terminology.

Katya Titov quoted a misleading part of the article: “In addition to facilitating anonymous communication online, Tor is an access point to the ‘dark Web’, vast reaches of the Internet that are intentionally kept hidden and don’t show up in Google or other search engines, […].”

As references to the “dark web”, the “deep web”, or the “dark deep shady Knockturn Alley of the Internet” have been popping up more and more in the media over the past months, Katya wanted to come up with proper definitions of commonly misunderstood terms to reduce misinformation and FUD.

She summarized the result of the discussion in a new HowBigIsTheDarkWeb wiki page. Be sure to point your fellow journalists to it!

Miscellaneous news

To follow up on last week’s Tor Weekly News coverage, Philipp Winter wrote a blog post to explain what the “Spoiled Onions” paper means for Tor users.

Thanks to Sukhbir Singh, users with email addresses can now request bridges and bundles via email.

Karsten Loesing dug up some statistics about the Tor Weather service. There are currently 1846 different email addresses subscribed for 2349 Tor relays.

Tor developers will be present at the Mozilla booth during FOSDEM’14. Drop by if you have questions or want to get involved in Tor!

Tor help desk roundup

Users repeatedly contact the Tor help desk about unreachable hidden services. If that happens, please first make sure the system clock is accurate and try to visit the hidden service for the Tor Project’s website (idnxcnkne4qt76tg.onion). If it works, it means that Tor is working as it should and there’s nothing more the Tor Project can do. Hidden services are solely the responsibility of their operators, and they are the only ones who can do something when a hidden service goes offline.

News from Tor StackExchange

Alex Ryan has been experiencing crashes of his relay running on a Raspberry Pi due to circuit creation storms. He found out that the problem disappeared after upgrading to the new 0.2.4 series of Tor. There are currently no official Raspbian packages, so users will have to build the package manually from source.

User cypherpunks wanted to know how to report security issues to the Tor Project. Until a proper process is decided, the best way at the moment is to contact Nick Mathewson, Andrea Shepard, or Roger Dingledine privately using their GnuPG keys.

How many hidden services can be served from a single Tor instance? Syrian Watermelon wants to know whether there is a hard limit and how memory usage scales. The question is still open and has attracted some interest from other users.

This issue of Tor Weekly News has been assembled by Lunar, George Kadianakis, qbi, Karsten Loesing and dope457.

Want to continue reading TWN? Please help us create this newsletter. We still need more volunteers to watch the Tor community and report important news. Please see the project page, write down your name and subscribe to the team mailing list if you want to get involved!

Tor Browser 3.5.1 is released

The 3.5.1 release of the Tor Browser Bundle is now available on the Download page. You can also download the bundles directly from the distribution directory.

Please see the FAQ listing for any issues you may have before contacting support or filing tickets.

This release features an update to OpenSSL to fix a denial of service condition, and to fix the NoScript whitelist to remove

This release also features Tor, as well as support for screen readers for the blind on Windows.

Here is the list of changes since 3.5. The 3.x ChangeLog is also available.

  • All Platforms
    • Bug 10447: Remove SocksListenAddress to allow multiple socks ports.
    • Bug 10464: Remove from NoScript whitelist
    • Bug 10537: Build an Arabic version of TBB 3.5
    • Update Torbutton to
      • Bug 9486: Clear NoScript Temporary Permissions on New Identity
      • Include Arabic translations
    • Update Tor Launcher to
      • Include Arabic translations
    • Update Tor to
    • Update OpenSSL to 1.0.1f
    • Update NoScript to
    • Update HTTPS-Everywhere to 3.4.5
  • Windows
    • Bug 9259: Enable Accessibility (screen reader) support
  • Mac
    • misc: Update bundle version field in Info.plist (for MacUpdates service)

New Tor Denial of Service Attacks and Defenses

New work on denial of service in Tor will be presented at NDSS '14 on Tuesday, February 25th, 2014:

The Sniper Attack: Anonymously Deanonymizing and Disabling the Tor Network
by Rob Jansen, Florian Tschorsch, Aaron Johnson, and Björn Scheuermann
To appear at the 21st Symposium on Network and Distributed System Security

A copy of the published paper is now available, as are my slides explaining the attacks in both PPTX and PDF formats.

We found a new vulnerability in the design of Tor's flow control algorithm that can be exploited to remotely crash Tor relays. The attack is an extremely low resource attack in which an adversary's bandwidth may be traded for a target relay's memory (RAM) at an amplification rate of one to two orders of magnitude. Ironically, the adversary can use Tor to protect its identity while attacking Tor without significantly reducing the effectiveness of the attack.

We studied relay availability under the attack using Shadow, a discrete-event network simulator that runs the real Tor software in a safe, private testing environment, and found that we could disable each of the fastest guard and the fastest exit relay in a range of 1-18 minutes (depending on relay RAM capacity). We also found that the entire group of the top 20 exit relays, representing roughly 35% of Tor bandwidth capacity at the time of the analysis, could be disabled in a range of 29 minutes to 3 hours and 50 minutes. We also analyzed how the attack could potentially be used to deanonymize hidden services, and found that it would take between 4 and 278 hours before the attack would succeed (again depending on relay RAM capacity, as well as the bandwidth resources used to launch the attack).

Due to our devastating findings, we also designed three defenses that mitigate our attacks, one of which provably renders the attack ineffective. Defenses have been implemented and deployed into the Tor software to ensure that the Tor network is no longer vulnerable as of Tor version and later. Some of that work can be found in Trac tickets #9063, #9072, #9093, and #10169.

In the remainder of this post I will detail the attacks and defenses we analyzed, noting again that this information is presented more completely (and more elegantly) in our paper.

The Tor Network Infrastructure

The Tor network is a distributed system made up of thousands of computers running the Tor software that contribute their bandwidth, memory, and computational resources for the greater good. These machines are called Tor relays, because their main task is to forward or relay network traffic to another entity after performing some cryptographic operations. When a Tor user wants to download some data using Tor, the user's Tor client software will choose three relays from those available (an entry, middle, and exit), form a path or circuit between these relays, and then instruct the third relay (the exit) to fetch the data and send it back through the circuit. The data will get transferred from its source to the exit, from the exit to the middle, and from the middle to the entry before finally making its way to the client.

Flow Control

The client may request the exit to fetch large amounts of data, and so Tor uses a window-based flow control scheme in order to limit the amount of data each relay needs to buffer in memory at once. When a circuit is created, the exit will initialize its circuit package counter to 1000 cells, indicating that it is willing to send 1000 cells into the circuit. The exit decrements the package counter by one for every data cell it sends into the circuit (to the middle relay), and stops sending data when the package counter reaches 0. The client at the other end of the circuit keeps a delivery counter, and initializes it to 0 upon circuit creation. The client increments the delivery counter by 1 for every data cell it receives on that circuit. When the client's delivery counter reaches 100, it sends a special Tor control cell, called a SENDME cell, to the exit to signal that it received 100 cells. Upon receiving the SENDME, the exit adds 100 to its package counter and continues sending data into the circuit.

This flow control scheme limits the amount of outstanding data that may be in flight at any time (between the exit and the client) to 1000 cells, or about 500 KiB, per circuit. The same mechanism is used when data is flowing in the opposite direction (up from the client, through the entry and middle, and to the exit).
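The counter discipline described above can be sketched in a few lines. The following is a minimal simulation; the class and constant names are invented for illustration and do not mirror Tor's actual implementation.

```python
# Sketch of Tor's window-based flow control as described above.
# All names here are illustrative, not taken from the Tor codebase.

CELL_BYTES = 512          # approximate usable bytes per cell
PACKAGE_WINDOW = 1000     # cells the packaging end may have in flight
SENDME_THRESHOLD = 100    # delivered cells acknowledged per SENDME

class PackagingEnd:
    """The exit relay's side of a circuit (for downstream traffic)."""
    def __init__(self):
        self.package_counter = PACKAGE_WINDOW

    def can_send(self):
        return self.package_counter > 0

    def send_cell(self):
        self.package_counter -= 1

    def receive_sendme(self):
        self.package_counter += SENDME_THRESHOLD

class DeliveryEnd:
    """The client's side of the same circuit."""
    def __init__(self, packaging_end):
        self.delivery_counter = 0
        self.packaging_end = packaging_end

    def receive_cell(self):
        self.delivery_counter += 1
        if self.delivery_counter % SENDME_THRESHOLD == 0:
            self.packaging_end.receive_sendme()  # acknowledge 100 cells

# A well-behaved client keeps the window open indefinitely...
exit_end = PackagingEnd()
client = DeliveryEnd(exit_end)
for _ in range(2500):
    exit_end.send_cell()
    client.receive_cell()
assert exit_end.can_send()

# ...but if the client stops reading, the exit stalls after exactly
# PACKAGE_WINDOW cells, bounding in-flight data to about 500 KiB:
exit_end = PackagingEnd()
buffered = 0
while exit_end.can_send():
    exit_end.send_cell()
    buffered += 1
print(buffered, buffered * CELL_BYTES)  # 1000 cells, 512000 bytes
```

Note that this 1000-cell bound is precisely what the attack described next circumvents, by ignoring the package window or sending SENDMEs for cells that were never read.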

The Sniper Attack

The new Denial of Service (DoS) attack, which we call "The Sniper Attack", exploits the flow control algorithm to remotely crash a victim Tor relay by depleting its memory resources. The paper presents three attacks that rely on the following two techniques:

  1. the attacker stops reading from the TCP connection containing the attack circuit, which causes the TCP window on the victim's outgoing connection to close and the victim to buffer up to 1000 cells; and
  2. the attacker causes cells to be continuously sent to the victim (exceeding the 1000 cell limit and consuming the victim's memory resources) either by ignoring the package window at the packaging end of the circuit, or by continuously sending SENDMEs from the delivery end to the packaging end even though no cells have been read by the delivery end.

I'll outline the attacks here, but remember that the paper and slides provide more details and useful illustrations of each of the three versions of the attack.

Basic Version 1 (attacking an entry relay)

In basic version 1, the adversary controls the client and the exit relay, and chooses a victim for the entry relay position. The adversary builds a circuit through the victim to her own exit, and then the exit continuously generates and sends arbitrary data through the circuit toward the client while ignoring the package window limit. The client stops reading from the TCP connection to the entry relay, and the entry relay buffers all data being sent by the exit relay until it is killed by its OS out-of-memory killer.

Basic Version 2 (attacking an exit relay)

In basic version 2, the adversary controls the client and an Internet destination server (e.g. website), and chooses a victim for the exit relay position. The adversary builds a circuit through the victim exit relay, and then the client continuously generates and sends arbitrary data through the circuit toward the exit relay while ignoring the package window limit. The destination server stops reading from the TCP connection to the exit relay, and the exit relay buffers all data being sent by the client until it is killed by its OS out-of-memory killer.

Efficient Version

Both of the basic versions of the attack above require the adversary to generate and send data, consuming roughly the same amount of upstream bandwidth as the victim's available memory. The efficient version reduces this cost by one to two orders of magnitude.

In the efficient version, the adversary controls only a client. She creates a circuit, choosing the victim for the entry position, and then instructs the exit relay to download a large file from some external Internet server. The client stops reading on the TCP connection to the entry relay, causing it to buffer 1000 cells.

At this point, the adversary may "trick" the exit relay into sending more cells by sending it a SENDME cell, even though the client has not actually received any cells from the entry. As long as this SENDME does not increase the exit relay's package counter to greater than 1000 cells, the exit relay will continue to package data from the server and send it into the circuit toward the victim. If the SENDME does cause the exit relay's package window to exceed the 1000 cell limit, it will stop responding on that circuit. However, the entry and middle node will hold the circuit open until the client issues another command, meaning its resources will not be freed.

The bandwidth cost of the attack after circuit creation is simply the bandwidth cost of occasionally sending a SENDME to the exit. The memory consumption speed depends on the bandwidth and congestion of non-victim circuit relays. We describe how to parallelize the attack using multiple circuits and multiple paths with diverse relays in order to draw upon Tor's inherent resources. We found that with roughly 50 KiB/s of upstream bandwidth, an attacker could consume the victim's memory at roughly 1 MiB/s. This is highly dependent on the victim's bandwidth capabilities: relays that use token buckets to restrict bandwidth usage will of course bound the attack's consumption rate.
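The amplification figures above can be checked with back-of-envelope arithmetic. The cell size and per-SENDME accounting below are assumptions made for illustration, not numbers taken from this post.

```python
# Back-of-envelope amplification estimate for the efficient attack,
# assuming a SENDME is a single ~514-byte cell that "acknowledges"
# 100 data cells (an illustrative assumption).

CELL_BYTES = 514                 # on-the-wire Tor cell size (assumed)
CELLS_PER_SENDME = 100           # data cells acknowledged per SENDME

attacker_cost = CELL_BYTES                     # one SENDME sent upstream
victim_cost = CELLS_PER_SENDME * CELL_BYTES    # cells buffered at victim
amplification = victim_cost / attacker_cost
print(amplification)             # 100.0 per SENDME in the ideal case

# The measured rate (~50 KiB/s upstream driving ~1 MiB/s of memory
# consumption) is lower, since circuit maintenance and congestion
# also cost bandwidth:
measured = 1024 / 50             # KiB consumed per KiB spent
print(round(measured, 1))        # roughly 20.5x
```

Both values sit within the "one to two orders of magnitude" range quoted earlier.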

Rather than connecting directly to the victim, the adversary may instead launch the attack through a separate Tor circuit using a second client instance and the "Socks4Proxy" or "Socks5Proxy" option. In this case, she may benefit from the anonymity that Tor itself provides in order to evade detection. We found that there is not a significant increase in bandwidth usage when anonymizing the attack in this way.


Defenses

A simple but naive defense against the Sniper Attack is to have the guard node watch its queue length, and if it ever fills to over 1000 cells, kill the circuit. This defense does not prevent the adversary from parallelizing the attack by using multiple circuits (and then consuming 1000 cells on each), which we have shown to be extremely effective.

Another defense, called "authenticated SENDMEs", tries to protect against receiving a SENDME from a node that didn't actually receive 100 cells. In this approach, a 1 byte nonce is placed in every 100th cell by the packaging end, and that nonce must be included by the delivery end in the SENDME (otherwise the packaging end rejects the SENDME as inauthentic). As above, this does not protect against the parallel attack. It also doesn't defend against either of the basic attacks where the adversary controls the packaging end and ignores the SENDMEs anyway.

The best defense, as we suggested to the Tor developers, is to implement a custom, adaptive out-of-memory circuit killer in application space (i.e. inside Tor). The circuit killer is only activated when memory becomes scarce, and then it chooses the circuit with the oldest front-most cell in its circuit queue. This will prevent the Sniper Attack by killing off all of the attack circuits.
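The selection rule can be sketched as follows; the names and data structures are invented for illustration, and Tor's real implementation differs.

```python
# Minimal sketch of the adaptive out-of-memory circuit killer described
# above: when memory is scarce, kill the circuit whose front-most
# queued cell is oldest.
import time
from collections import deque

class Circuit:
    def __init__(self, name):
        self.name = name
        self.queue = deque()       # (arrival_time, cell) pairs

    def enqueue(self, cell, now=None):
        self.queue.append((now if now is not None else time.time(), cell))

    def front_cell_age(self, now):
        if not self.queue:
            return -1.0            # empty queues are never chosen
        return now - self.queue[0][0]

def pick_victim(circuits, now):
    """Return the circuit with the oldest front-most cell."""
    return max(circuits, key=lambda c: c.front_cell_age(now))

# An attack circuit whose reader never drains the connection
# accumulates stale cells, so it is selected before a busy honest
# circuit whose cells are forwarded promptly:
attack, honest = Circuit("attack"), Circuit("honest")
attack.enqueue(b"cell", now=0.0)    # queued long ago, never drained
honest.enqueue(b"cell", now=99.0)   # queued recently
victim = pick_victim([attack, honest], now=100.0)
print(victim.name)                  # attack
```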

With this new defense in place, the next game is for the adversary to try to cause Tor to kill an honest circuit. In order for an adversary to cause an honest circuit to get killed, it must ensure that the front-most cell on its malicious circuit queue is at least slightly "younger" than the oldest cell on any honest queue. We show that the Sniper Attack is impractical with this defense: due to fairness mechanisms in Tor, the adversary must spend an extraordinary amount of bandwidth keeping its cells young — bandwidth that would likely be better served in a more traditional brute-force DoS attack.

Tor has implemented a version of the out-of-memory killer for circuits, and is currently working on expanding this to channel and connection buffers as well.

Hidden Service Attack and Countermeasures

The paper also shows how the Sniper Attack can be used to deanonymize hidden services:

  1. run a malicious entry guard relay;
  2. run the attack from Oakland 2013 to learn the current guard relay of the target hidden service;
  3. run the Sniper Attack on the guard from step 2, knocking it offline and causing the hidden service to choose a new guard;
  4. repeat, until the hidden service chooses the relay from step 1 as its new entry guard.

The technique to verify that the hidden service is using a malicious guard in step 4 is the same technique used in step 2.
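Assuming each attack forces an independent guard re-selection, the repeat-until loop in steps 3-4 behaves like geometric sampling. A toy Monte Carlo model follows; the selection probability used is illustrative, not a figure from the paper.

```python
# Toy model of the repeat-until loop above: each Sniper Attack forces
# the hidden service to pick a fresh guard, and the malicious guard is
# chosen with probability p per round (in Tor, roughly proportional to
# its bandwidth weight). The number of attacks needed is then
# geometrically distributed with mean 1/p. The p below is illustrative.
import random

def attacks_until_capture(p, rng):
    """Count guard re-selections until the malicious guard is chosen."""
    n = 1
    while rng.random() >= p:
        n += 1
    return n

rng = random.Random(1)
p = 1 / 18                         # illustrative selection probability
trials = [attacks_until_capture(p, rng) for _ in range(10000)]
print(sum(trials) / len(trials))   # close to 1/p = 18 attacks on average
```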

In the paper, we compute the expected time to succeed in this attack while running malicious relays of various capacities. It takes longer to succeed against relays that have more RAM, since it relies on the Sniper Attack to consume enough RAM to kill the relay (which itself depends on the bandwidth capacity of the victim relay). For the malicious relay bandwidth capacities and honest relay RAM amounts used in our estimates, we found that deanonymization would involve between 18 and 132 Sniper Attacks and take between ~4 and ~278 hours.

This attack becomes much more difficult if the relay is rebooted soon after it crashes, and the attack is ineffective when Tor relays are properly defending against the Sniper Attack (see the "Defenses" section above).

Strategies to defend hidden services in particular go beyond those suggested here. They include entry guard rate-limiting, where you stop building circuits if you notice that your new guards keep going down (failing closed), and middle guards, that is, guard nodes for your guard nodes. Both strategies attempt to make it harder to coerce the hidden service into building new circuits or exposing itself to new relays, since that is precisely what is needed for deanonymization.

Open Problems

The main defense implemented in Tor will start killing circuits when memory gets low. Currently, Tor uses a configuration option (MaxMemInCellQueues) that allows a relay operator to configure when the circuit-killer should be activated. There is likely not one single value that makes sense here: if it is too high, then relays with lower memory will not be protected; if it is too low, then there may be more false positives resulting in honest circuits being killed. Can Tor determine this setting in an OS-independent way that allows relays to automatically find the right value for MaxMemInCellQueues?
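The defense can be sketched as follows. This is illustrative Python, not Tor's actual C implementation; the cell size and threshold are made up, and `maybe_kill` stands in for the real circuit-killer logic.

```python
import time

# Illustrative sketch of the circuit out-of-memory killer: when total
# queued-cell memory exceeds a MaxMemInCellQueues-style threshold, kill
# the circuit whose front-most queued cell is the oldest. An honest
# circuit drains its queue, so its front cell stays young; a sniper's
# queue front grows stale.
CELL_SIZE = 512                      # bytes per cell (approximate)
MAX_MEM = 256 * 1024 * 1024          # hypothetical threshold in bytes

class Circuit:
    def __init__(self, name):
        self.name = name
        self.queue = []              # FIFO of (arrival_time, cell) pairs

    def enqueue(self, cell, now=None):
        self.queue.append((time.time() if now is None else now, cell))

def maybe_kill(circuits, max_mem=MAX_MEM):
    """Return the circuit to kill, or None if memory is below the limit."""
    queued = sum(CELL_SIZE * len(c.queue) for c in circuits)
    if queued <= max_mem:
        return None
    # Victim: the circuit whose oldest (front-most) cell arrived earliest.
    return min((c for c in circuits if c.queue), key=lambda c: c.queue[0][0])
```

With this sketch, a queue whose front cell has been sitting the longest is the first to go once the threshold is crossed, which is exactly the property the adversary must fight by constantly refreshing its cells.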

The defenses against the Sniper Attack prevent the adversary from crashing the victim relay, but the adversary may still consume a relay's bandwidth (and memory, up to the kill threshold) at relatively low cost, similar to more traditional bandwidth amplification attacks. More analysis of general bandwidth consumption attacks and defenses remains a useful research problem.

Finally, hidden services also need some love. More work is needed to redesign them in a way that does not allow a client to cause the hidden service to choose new relays on demand.

What the "Spoiled Onions" paper means for Tor users

Together with Stefan, I recently published the paper "Spoiled Onions: Exposing Malicious Tor Exit Relays". The paper discusses only our results and how we obtained them, and says little about the implications for Tor users. This blog post should fill that gap.

First, it's important to understand that 25 relays in four months isn't a lot; it is ultimately a very small fraction of the Tor network. It also doesn't mean that 25 out of 1,000 relays are malicious or misconfigured (we weren't very clear on that in the paper). We have yet to calculate the churn rate of exit relays, i.e., the rate at which relays join and leave the network; 1,000 is just the approximate number of exit relays at any given point in time, so the actual number of exit relays we ended up testing over four months is certainly higher than that. As a user, this means that you will not see many malicious relays "in the wild".

Second, Tor clients select relays for their circuits based on the bandwidth those relays contribute to the network. Faster relays see more traffic than slower relays, which balances the load in the Tor network. Many of the malicious exit relays contributed relatively little bandwidth to the Tor network, which makes them quite unlikely to be chosen as a relay in a circuit.
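The selection principle can be illustrated with a toy weighted choice. The relay names and bandwidths below are hypothetical, and Tor's real path selection involves additional rules (consensus weights, exit policies, guard flags, and so on).

```python
import random

# Toy sketch of bandwidth-weighted relay selection (NOT Tor's actual
# algorithm): a relay's chance of being picked is proportional to the
# bandwidth it contributes, so a low-bandwidth malicious exit is rarely
# chosen.
relays = {"fast-exit": 10000, "medium-exit": 2000, "slow-exit": 100}  # KB/s, made up
names = list(relays)
weights = [relays[n] for n in names]

counts = {n: 0 for n in names}
for _ in range(10000):
    counts[random.choices(names, weights=weights)[0]] += 1
# "fast-exit" dominates the tally; "slow-exit" is picked only rarely.
```

Under these made-up weights, the slow exit is selected in well under 1% of circuits, which is the intuition behind why low-bandwidth malicious relays see little client traffic.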

Third, even if your traffic is going through a malicious exit relay, it doesn't mean that everything is lost. Many of the attacks we discovered still caused Firefox's infamous "about:certerror" warning page. As a vigilant user, you would notice that something isn't quite right and hopefully leave the site. In addition, TorBrowser ships with HTTPS-Everywhere, which by default attempts to connect to some sites over HTTPS even though you just typed "http://". After all, as we said in the past, "Plaintext over Tor is still plaintext".

Finally, we want to point out that all of these attacks are of course not limited to the Tor network. You face the very same risks when connecting to any public WiFi network. One of the fundamental problems is the broken CA system: do you actually know all of the ~50 organisations you implicitly trust when you start Firefox, Chrome, or TorBrowser? Making the CA system more secure is a very challenging task for the entire Internet, not just the Tor network.

Tor Weekly News — January 22nd, 2014

Welcome to the third issue in 2014 of Tor Weekly News, the weekly newsletter that covers what is happening in the Tor community.

Future of the geolocalization database used in Tor software

The first version of Tor to include an IP-to-country database was released in 2008. In 2010, the database switched from data provided by WebHosting.Info to MaxMind’s more up-to-date GeoLite service. All was good until two years later, when MaxMind started to hide the country of Tor relays, marking them as being from the “A1” country, standing for “anonymous proxy”. Karsten Loesing has been tirelessly doing manual database updates ever since.

MaxMind has launched GeoIP2 as a successor to its previous service. The very good news, as spotted by Karsten, is that the new format “provide[s] the A1/A2 information in *addition* to the correct country codes”.

The question is how this new database should be integrated into the different software using geolocalization information: Tor, BridgeDB, the metrics database, and the metrics website. Tor has so far always used a custom format, so writing a converter from MaxMind’s database format is one option. Another option is to integrate the parsing libraries provided by MaxMind into Tor software.

Both approaches have their advantages. In any case, these can be useful, fun, and small projects for someone new to the Tor community. Be sure to have a look at Karsten’s suggestions if you feel like helping.

Key generation on headless and diskless relays

Following up on his work on Torride — a live Linux distribution meant to run Tor relays — anarcat asked about key generation in low-entropy situations. Lunar had raised a similar question for the Tor-ramdisk distribution a couple of months ago.

“The concern here is what happens when Tor starts up the first time. I believe it creates a public/private key pair for its cryptographic routines. In Torride, this is done right on the start of the operating system, when the entropy of the system is low or inexistent” explained anarcat.

Gerardus Hendricks made a quick analysis of the Tor source code and determined that keys are generated using entropy from /dev/urandom — an insecure behavior in low-entropy situations.

Nick Mathewson suggested changing the initialization procedure to “try to read a byte from /dev/random before it starts Tor, and block until it actually can read that byte.” This would “ensure that the kernel RNG has (by its own lights) reached full entropy at least once, which guarantees cryptographic quality of the rest of the /dev/urandom stream.” More general solutions are now being discussed in a newly created ticket.
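Nick’s suggestion amounts to something like the following minimal sketch. This is hypothetical wrapper code, not taken from Tor itself; it only demonstrates the “block on /dev/random once, then trust /dev/urandom” idea on a Linux system.

```python
# Minimal sketch of the suggested startup gate (hypothetical, not Tor
# code): block until a single byte can be read from /dev/random, which
# ensures the kernel RNG has reached full entropy at least once. After
# that, the /dev/urandom stream Tor reads is of cryptographic quality.
with open("/dev/random", "rb") as f:
    seed_byte = f.read(1)  # may block on a freshly booted, low-entropy system
# ... at this point it is safe to launch the tor binary
```

On a diskless or freshly imaged machine, this read is exactly where the process would wait until the kernel has gathered enough entropy.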

Exposing malicious exit relays

Anyone is free to start a new Tor relay and join the Tor network. Most Tor relay operators are volunteers who dedicate time and money to support online privacy.

Unfortunately, as Philipp Winter and Stefan Lindskog wrote in the introduction of their new research project, “there are exceptions: in the past, some exit relays were documented to have sniffed and tampered with relayed traffic”. The project, dubbed “spoiled onions”, aims at “monitoring all exit relays for several months in order to expose, document, and thwart malicious or misconfigured relays”.

The paper gives more details on the modular scanning software that has been developed, and elaborates on how it can detect tampering with the HTTP, HTTPS, SSH, and DNS protocols. The paper also discusses how, occasionally, it is the relay’s ISP rather than the operator that is responsible for an attack, despite the operator’s good faith.

The authors also describe an extension to the Tor Browser that can help with detecting HTTPS man-in-the-middle attacks: if the browser is unable to verify a certificate, it will automatically retrieve the certificate again using a different Tor exit node. If the certificates do not match, a warning can then be issued informing the user that an attack might be happening and offering to notify the Tor Project. However, the extension is merely a proof of concept and not usable at this point.
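The core of the extension’s check can be sketched as a fingerprint comparison. The byte strings below are placeholders for real DER-encoded certificates, and the real extension fetches the second copy over a different Tor circuit; this sketch only shows the comparison step.

```python
import hashlib

# Sketch of the extension's core idea: compare the certificate the
# browser saw with one re-fetched for the same host via a different Tor
# exit. Differing fingerprints suggest a man-in-the-middle on one path.
def fingerprint(der_bytes):
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()

def possible_mitm(cert_seen, cert_via_other_exit):
    """True if the two fetches returned different certificates."""
    return fingerprint(cert_seen) != fingerprint(cert_via_other_exit)

# Placeholder blobs standing in for real DER certificate bytes:
genuine = b"hypothetical DER-encoded certificate"
tampered = b"a different certificate injected by a bad exit"
```

A match does not prove the certificate is legitimate (both exits could be colluding), but a mismatch is a strong signal worth warning the user about, which is exactly what the proof-of-concept extension does.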

Philipp and Stefan’s efforts have already identified 25 bad relays, which have subsequently been marked as such by directory authority operators. While we hope the number of problematic relays stays low, this work should help identify, as quickly as possible, relays that try to abuse Tor users.

Miscellaneous news

Alex reported his bad experience with Hetzner when attempting to participate in the “Trusted Tor Traceroutes” experiment. Paul Görgen reported similar troubles, even at a lower packets-per-second rate. Relay operators might want to warn their ISP before undertaking the experiment in the future to avoid similar misadventures.

Anupam Das reported that they have “received a good rate of participation by relay operators to our measurement project”. To measure progress, there is now a live scoreboard of all participants.

The integration of “pluggable transports” in the main Tor Browser Bundle is moving smoothly. David Fifield published beta images of his recent work, and the initial implementation adding a default set of bridges to Tor Launcher has been completed.

Following up on last week’s call for help regarding Tor Weather, Karsten Loesing is organizing an IRC meeting with interested developers on Wed, Jan 22, 18:00 UTC. The meeting will happen in #tor-dev on OFTC.

As part of the website redesign effort, Marck Al proposed an updated visual identity. Lunar also highlighted a couple of tasks that could be undertaken to move the website redesign forward.

Tails’ release calendar has been shifted by two weeks because of the holiday break from Mozilla.

Ximin Luo has been discussing with I2P developers on how Pluggable Transports could be made easier to use by other projects.

Isis Lovecruft has sent late reports on her activity for October, November and December 2013.

There are two weeks left to participate in the crowdfunding campaign started by the Freedom of the Press Foundation. Among other projects, the money will support core Tor development and Tails 1.0 release.

Tor help desk roundup

Frequently users email the Tor help desk because they cannot access a particular public-facing website. Often this is because an increasing number of websites have begun blocking connections that appear to come from the Tor network. A partial list of websites that do this can be found on Tor Project’s wiki. Feel free to add more sites to the list, and to contact the website’s operators to explain why banning Tor is not the best course of action.

Some users reported websites that do not allow logins when using the Tor Browser. This is not always related to website blocks or blacklists: there is a known bug in the Tor Browser Bundle whereby Private Browsing Mode disallows cookies in a way that some sites don’t like. Disabling Private Browsing Mode via Torbutton’s Preferences is a workaround; the bug will hopefully be fixed soon.

This issue of Tor Weekly News has been assembled by Lunar, Matt Pagan, Philipp Winter, Karsten Loesing, Sandeep, and dope457.

Want to continue reading TWN? Please help us create this newsletter. We still need more volunteers to watch the Tor community and report important news. Please see the project page, write down your name and subscribe to the team mailing list if you want to get involved!
