
Tor 0.2.2.34 is released (security patches)

Tor 0.2.2.34 fixes a critical anonymity vulnerability where an attacker
can deanonymize Tor users. Everybody should upgrade.

The attack relies on four components:

  1) Clients reuse their TLS cert when talking to different relays, so relays can recognize a user by the identity key in her cert.
  2) An attacker who knows the client's identity key can probe each guard relay to see if that identity key is connected to that guard relay right now.
  3) A variety of active attacks in the literature (starting from "Low-Cost Traffic Analysis of Tor" by Murdoch and Danezis in 2005) allow a malicious website to discover the guard relays that a Tor user visiting the website is using.
  4) Clients typically pick three guards at random, so the set of guards for a given user could well be a unique fingerprint for her (a rough calculation follows below).

This release fixes components #1 and #2, which is enough to block the attack; the other two remain as open research problems.
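
To get a feel for why a three-guard set is so identifying, here is a back-of-the-envelope sketch. It is an illustration only: the number of Guard-flagged relays and the user count below are rough assumptions, and real clients weight their guard choices by bandwidth rather than picking uniformly.

    from math import comb

    # Rough assumptions for illustration: a few hundred Guard-flagged relays
    # and a few hundred thousand concurrent users. Real clients also weight
    # guard selection by bandwidth, which this sketch ignores.
    guard_relays = 800
    guards_per_client = 3
    concurrent_users = 300_000

    possible_sets = comb(guard_relays, guards_per_client)
    print(f"possible 3-guard sets:  {possible_sets:,}")                     # ~85 million
    print(f"expected users per set: {concurrent_users / possible_sets:.4f}")
    # With far more possible guard sets than users, most users' guard sets
    # are unique, so learning someone's guard set goes a long way toward
    # identifying her.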

Special thanks to "frosty_un" for reporting the issue to us! (As far as we know, this has nothing to do with any claimed attack currently getting attention in the media.)

Clients should upgrade so they are no longer recognizable by the TLS certs they present. Relays should upgrade so they no longer allow a remote attacker to probe them to test whether unpatched clients are currently connected to them.

This release also fixes several vulnerabilities that allow an attacker to enumerate bridge relays. Some bridge enumeration attacks still remain; see for example proposal 188.

https://torproject.org/download/download-easy

Changes in version 0.2.2.34 - 2011-10-26

Privacy/anonymity fixes (clients):

  • Clients and bridges no longer send TLS certificate chains on outgoing OR
    connections. Previously, each client or bridge would use the same cert chain
    for all outgoing OR connections until its IP address changed, which allowed any
    relay that the client or bridge contacted to determine which entry guards it
    was using. Fixes CVE-2011-2768. Bugfix on 0.0.9pre5; found by "frosty_un".
  • If a relay receives a CREATE_FAST cell on a TLS connection, it no longer
    considers that connection as suitable for satisfying a circuit EXTEND request.
    Now relays can protect clients from the CVE-2011-2768 issue even if the clients
    haven't upgraded yet.
  • Directory authorities no longer assign the Guard flag to relays that
    haven't upgraded to the above "refuse EXTEND requests to client connections"
    fix. Now directory authorities can protect clients from the CVE-2011-2768 issue
    even if neither the clients nor the relays have upgraded yet. There's a new
    "GiveGuardFlagTo_CVE_2011_2768_VulnerableRelays" config option to let us
    transition smoothly, else tomorrow there would be no guard relays.

Privacy/anonymity fixes (bridge enumeration):

  • Bridge relays now do their directory fetches inside Tor TLS connections,
    like all the other clients do, rather than connecting directly to the DirPort
    like public relays do. Removes another avenue for enumerating bridges. Fixes
    bug 4115; bugfix on 0.2.0.35.
  • Bridge relays now build circuits for themselves in a way more similar to
    how clients build them. Removes another avenue for enumerating bridges. Fixes
    bug 4124; bugfix on 0.2.0.3-alpha, when bridges were introduced.
  • Bridges now refuse CREATE or CREATE_FAST cells on OR connections that they
    initiated. Relays could distinguish incoming bridge connections from client
    connections, creating another avenue for enumerating bridges. Fixes
    CVE-2011-2769. Bugfix on 0.2.0.3-alpha. Found by "frosty_un".

Major bugfixes:

  • Fix a crash bug when changing node restrictions while a DNS lookup is
    in progress. Fixes bug 4259; bugfix on 0.2.2.25-alpha. Bugfix by "Tey'".
  • Don't launch a useless circuit after failing to use one of a hidden
    service's introduction points. Previously, we would launch a new introduction
    circuit, but not set the hidden service which that circuit was intended to
    connect to, so it would never actually be used. A different piece of code would
    then create a new introduction circuit correctly. Bug reported by katmagic and
    found by Sebastian Hahn. Bugfix on 0.2.1.13-alpha; fixes bug 4212.

Minor bugfixes:

  • Change an integer overflow check in the OpenBSD_Malloc code so that GCC is
    less likely to eliminate it as impossible. Patch from Mansour Moufid. Fixes bug
    4059.
  • When a hidden service turns an extra service-side introduction circuit into
    a general-purpose circuit, free the rend_data and intro_key fields first, so we
    won't leak memory if the circuit is cannibalized for use as another
    service-side introduction circuit. Bugfix on 0.2.1.7-alpha; fixes bug
    4251.
  • Bridges now skip DNS self-tests, to act a little more stealthily. Fixes
    bug 4201; bugfix on 0.2.0.3-alpha, which first introduced bridges. Patch by
    "warms0x".
  • Fix internal bug-checking logic that was supposed to catch failures in
    digest generation so that it will fail more robustly if we ask for a
    nonexistent algorithm. Found by Coverity Scan. Bugfix on 0.2.2.1-alpha; fixes
    Coverity CID 479.
  • Report any failure in init_keys() calls launched because our IP address has
    changed. Spotted by Coverity Scan. Bugfix on 0.1.1.4-alpha; fixes CID 484.

Minor bugfixes (log messages and documentation):

  • Remove a confusing dollar sign from the example fingerprint in the man
    page, and also make the example fingerprint a valid one. Fixes bug 4309; bugfix
    on 0.2.1.3-alpha.
  • The next version of Windows will be called Windows 8, and it has a major
    version of 6, minor version of 2. Correctly identify that version instead of
    calling it "Very recent version". Resolves ticket 4153; reported by
    funkstar.
  • Downgrade log messages about circuit timeout calibration from "notice" to
    "info": they don't require or suggest any human intervention. Patch from Tom
    Lowenthal. Fixes bug 4063; bugfix on 0.2.2.14-alpha.

Minor features:

  • Turn on directory request statistics by default and include them in
    extra-info descriptors. Don't break if we have no GeoIP database. Backported
    from 0.2.3.1-alpha; implements ticket 3951.
  • Update to the October 4 2011 Maxmind GeoLite Country database.

Rumors of Tor's compromise are greatly exaggerated

There are two recent stories claiming the Tor network is compromised. It seems it is easier to get press than to publish research, work with us on the details, and propose solutions. Our comments here are based upon the same stories you are reading. We have no insider information.

The first story is about 'Freedom Hosting' and their hosting of child abuse materials, as exposed by Anonymous's Operation Darknet. We're reading the same press articles and pastebin URLs, and talking to the same people, as you are. It appears 'Anonymous' cracked the Apache/PHP/MySQL setup at Freedom Hosting and published some, or all, of the users in its database. These sites happened to be hosted on a Tor hidden service. Further, 'Anonymous' used a somewhat recent RAM-exhaustion denial of service attack on the 'Freedom Hosting' Apache server. It's a simple resource starvation attack that can be conducted over low-bandwidth, low-resource connections to individual hosts. This isn't an attack on Tor, but rather an attack on some software behind a Tor hidden service. This attack was discussed in a thread on the tor-talk mailing list starting October 19th.

The second story concerns Eric Filiol's claims of compromising the Tor network, leading up to his Hackers to Hackers talk in Brazil in a few days. The claim was initially announced by some French websites, but it has since spread further, for example to this Hacker News story.

Again, the tor-talk mailing list had the first discussions of these attacks back on October 13th. To be clear, neither Eric nor his researchers have disclosed anything about this attack to us. They have not talked to us, nor shared any data with us — despite some mail exchanges where we reminded him about the phrase "responsible disclosure".

Here's the attack as we understand it, from reading the various press reports:

They enumerated 6000 IP addresses that they think are Tor relays. There aren't that many Tor relays in the world — 2500 is a more accurate number. We're not sure what caused them to overcount so much. Perhaps they watched the Tor network over a matter of weeks and collected a bunch of addresses that aren't relays anymore? The set of relays is public information, so there's no reason to collect your own list and certainly no reason to end up with a wrong list.

One-third of the machines on those IP addresses are vulnerable to operating system or other system-level attacks, meaning they can break in. That's quite a few! We wonder whether that's true of the real Tor network, or just their simulated one. Even ignoring the question of what these 3500 extra IP addresses are, it's important to remember that one-third by number is not at all the same as one-third by capacity: Tor clients load-balance over relays based on relay capacity, so any useful statement should be about how much of the Tor network's capacity is vulnerable. It would indeed be shocking if one-third of the Tor network by capacity were vulnerable to external attacks.
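
Both points are easy to check for yourself. The relay list, along with each relay's consensus bandwidth weight, is published in the network consensus that every Tor client keeps on disk. The sketch below parses that file and contrasts a subset's share by count with its share by capacity; the file path and the "first third of relays" subset are assumptions for illustration.

    # Parse a Tor network consensus (e.g. the "cached-consensus" file in a
    # Tor client's data directory) and compare a subset's share of relays
    # by count with its share by consensus bandwidth weight. The path and
    # the example "first third" subset are assumptions for illustration.

    def load_relay_weights(path="cached-consensus"):
        weights = []
        saw_relay = False
        with open(path) as f:
            for line in f:
                if line.startswith("r "):              # one "r" line per relay
                    saw_relay = True
                elif saw_relay and line.startswith("w Bandwidth="):
                    weights.append(int(line.split("=")[1].split()[0]))
                    saw_relay = False
        return weights

    weights = load_relay_weights()
    print(f"{len(weights)} relays in the consensus")

    subset = weights[: len(weights) // 3]              # pretend these were compromised
    print(f"share by count:    {len(subset) / len(weights):.1%}")
    print(f"share by capacity: {sum(subset) / sum(weights):.1%}")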

(There's also an aside about enumerating bridges. They say they found 181 bridges, and then there's a quote saying they "now have a complete picture of the topography of Tor", which is a particularly unfortunate time for that quote since there are currently around 600 bridges running.)

We expect the talk will include discussion about some cool Windows trick that can modify the crypto keys in a running Tor relay that you have local system access to; but it's simpler and smarter just to say that when the attacker has local system access to a Tor relay, the attacker controls the relay.

Once they've broken into some relays, they do congestion attacks like packet spinning to congest the relays they couldn't compromise, to drive users toward the relays they own. It's unclear how many resources are needed to keep the rest of the relays continuously occupied long enough to keep the user from using them. There are probably some better heuristics that clients can use to distinguish between a loaded relay and an unavailable relay; we look forward to learning how well their attack here actually worked.

From there, the attack gets vague. The only hint we have is this nonsense sentence from the article:

The remaining flow can then be decrypted via a fully method of attack called "to clear unknown" based on statistical analysis.

Do they have a new attack on AES, or on OpenSSL's implementation of it, or on our use of OpenSSL? Or are they instead doing some sort of timing attack, where if you own the client's first hop and also the destination you can use statistics to confirm that the two flows are on the same circuit? There's a history of confused researchers proclaiming some sort of novel active attack when passive correlation attacks are much simpler and just as effective.
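
To show how little machinery a passive correlation attack needs, here is a toy sketch. It is entirely illustrative: the traces are synthetic and the window size is arbitrary. It bins packet timestamps observed at the entry side and at the destination into fixed time windows and measures how strongly the two traffic patterns correlate.

    import numpy as np

    def correlate(entry_times, exit_times, window=0.5):
        """Bin two packet-timestamp traces into fixed time windows and return
        the Pearson correlation of their per-window packet counts. A high
        value suggests the two traces carry the same flow. Toy sketch only."""
        end = max(entry_times.max(), exit_times.max())
        edges = np.arange(0.0, end + window, window)
        entry_counts, _ = np.histogram(entry_times, edges)
        exit_counts, _ = np.histogram(exit_times, edges)
        return float(np.corrcoef(entry_counts, exit_counts)[0, 1])

    # Synthetic traces: the "exit" trace is the "entry" trace plus small
    # jitter, as if the same packets were observed again after crossing
    # the network; the third trace is an unrelated flow.
    rng = np.random.default_rng(0)
    entry = np.cumsum(rng.exponential(0.2, size=200))
    exit_ = entry + rng.normal(0.05, 0.02, size=200)
    unrelated = np.cumsum(rng.exponential(0.2, size=200))

    print("same flow:     ", round(correlate(entry, exit_), 3))
    print("different flow:", round(correlate(entry, unrelated), 3))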

So the summary of the attack might be "take control of the nodes you can, then congest the other ones so your targets avoid them and use the nodes you control. Then do some unspecified magic crypto attack to defeat the layers of encryption for later hops in the circuit." But really, these are just guesses based on the same news articles you're reading. We look forward to finding out whether there's actually an attack we can fix, or whether they are just playing all the journalists to get attention.

More generally, there are two broader lessons to remember here. First, research into anonymity-breaking attacks is how the field moves forward, and using Tor for your target is common because a) it's resistant to all the simpler attacks and b) we make it really easy to do your research on. And second, remember that most other anonymity systems out there fall to these attacks so quickly and thoroughly that no researchers even talk about it anymore. For some recent examples, see the single-hop proxy discussions in How Much Anonymity does Network Latency Leak? and Website Fingerprinting in Onion Routing Based Anonymization Networks.

I thank Roger, Nick, and Runa for helping with this post.

Plain Vidalia Bundles to be Discontinued (Don't Panic!)

Over the past few years, Tor has gotten more popular and has had to grow and change to accommodate a highly varied userbase. One aspect of this is getting the software into users' hands and having it immediately do what they want it to, while also not allowing them to inadvertently deanonymize themselves because they missed a configuration step or didn't understand which applications were using Tor and which were not. As a result, we have standardized on the Tor Browser Bundle for all platforms and are currently promoting it as our only fully supported client experience.

Since the Tor Browser Bundle offers the best current protection, we are moving to a client/server model for packages, and consequently the "plain" Vidalia bundles will be discontinued by the end of the year and no longer recommended for client usage. We've started rolling out server Vidalia bundles for Windows, which you can test by going to the download page.

There are currently (and will continue to be) three types of server bundles available; a rough sketch of the torrc defaults behind each follows the list:

  • Bridge-by-default Vidalia bundle
    This configures Tor to act as a bridge by default, so as soon as you install it and run it, you will be helping censored users reach the Tor network. You can read more about bridges here. This bundle still includes Torbutton and Polipo, but those will be removed in the next release (date to be determined).

  • Relay-by-default Vidalia bundle
    This configures Tor to run as a non-exit relay by default. This means you will serve as either a guard or a middle node and help grow the size of the Tor network. You can read more about Tor relay configuration here.

  • Exit-by-default Vidalia bundle
    This configures Tor to run as an exit relay by default. Exit nodes are special, as they allow traffic to exit from the Tor network to the plain internet, and anyone who has not already looked into the risks associated with running an exit relay should read our tips for running an exit node with minimal harassment.
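
For reference, the differences between these defaults boil down to a few torrc lines. The following is only a sketch of the idea, not the bundles' exact shipped configuration, and the port number is illustrative:

    # Bridge-by-default (sketch)
    ORPort 443
    BridgeRelay 1
    ExitPolicy reject *:*        # never carry exit traffic

    # Relay-by-default (sketch)
    ORPort 443
    ExitPolicy reject *:*        # guard/middle positions only

    # Exit-by-default (sketch)
    ORPort 443
    # ExitPolicy left at Tor's default, which allows exiting on most ports
    # while rejecting a few abuse-prone ones such as SMTP (port 25).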

We've started creating a Tor Browser Bundle FAQ, but we'd like to hear your concerns so we can provide answers where necessary, documentation for alternative setups, and fix the software where answers are insufficient. We have several months before Tor Browser Bundle is the only option, so please help us make it as good as possible! If you have bugs to file, please don't file them in the blog comments -- use our bug tracker for that.

Trip report, Arab Bloggers Meeting, Oct 3-7

Jake, Arturo, and I went to Tunisia Oct 3-7 to teach a bunch of bloggers from Arab countries about Tor and more generally about Internet security and privacy. The previous meetings were in Lebanon; it's amazing to reflect that the world has changed enough that Sami can hold it in his home country now.

The conference was one day of keynotes with lots of press attention, and then three days of unconference-style workshops.

On the keynote day, Jake and Arturo did a talk on mobile privacy, pointing out the wide variety of ways that the telephone network is "the best surveillance tool ever invented". The highlight for the day was when Moez Chakchouk, the head of the Tunisian Internet Agency (ATI), did a talk explicitly stating that Tunisia had been using Smartfilter since 2002, that Smartfilter had been giving Tunisia discounts in exchange for beta-testing their products for other countries in the region like Saudi Arabia, and that it was time for Tunisia to stop wasting money on expensive filters that aren't good for the country anyway.

We did a four-hour Tor training on the first workshop day. We covered what to look for in a circumvention or privacy tool (open source good, open design good, open analysis of security properties good, centralization bad). All the attendees left with a working Tor Browser Bundle install (well, all the attendees except the fellow with the iPad). We got many of them to install Pidgin and OTR as well, but ran into some demo bugs around the Jabber connect server config that derailed some users. I look forward to having the Tor IM Browser Bundle back in action now that we've fixed some Pidgin security bugs.

We did a three-hour general security and privacy Q&A on the second workshop day, covering topics like whether Skype is safe, how else they can do VoIP, how they can trust various software, a demo of what sniffing the network can show, iPhone vs Android vs BlackBerry, etc. It ended with a walk-through of how *we* keep our laptops secure, so people could see how far down the rabbit hole they can go.

Syria and Israel seem to be the scariest adversaries in the area right now, in terms of oppression technology and willingness to use it. Or said another way, if you live in Syria or Palestine, you are especially screwed. We heard some really sad and disturbing stories; but those stories aren't mine to tell here.

We helped to explain the implications of the 54 gigs of Bluecoat logs that got published from inside Syria, detailing URLs and the IP addresses that fetched them. (The IP addresses were scrubbed from the published version of the logs, but the URLs, user agents, timestamps, etc still contain quite sensitive info.)
http://advocacy.globalvoicesonline.org/2011/10/10/bluecoat-us-technology...

Perhaps most interesting in the Bluecoat logs is the evidence of Bluecoat devices phoning home to get updates. So much for Bluecoat's claims that they don't provide support to Syria. If the US government chose to enforce its existing laws against American companies selling surveillance tools to Syria, it would be a great step toward making Tor users safer in Syria right now: no doubt Syria has some smart people who can configure things locally, but it's way worse when Silicon Valley engineers provide new filter rules to detect protocols like Tor for no additional charge.

The pervasiveness of video cameras and journalists at the meeting was surprising. I'm told the previous Arab blogger meeting was just a bunch of geeks sitting around their laptops talking about how to improve their countries and societies. Now that the Twitter Revolution is hot in the press, I guess all the Western media now want a piece of the action.

On the third workshop day we learned that there was a surveillance corporate expo happening in the same hotel as the blogger meeting. We crashed it and collected some brochures. We also found a pair of students from a nearby university who had set up a booth to single-handedly try to offset the evil of the expo. They were part of a security student group at their university that had made a magazine that talked among other things about Tor, Tunisian filtering, etc. We gave them a big pile of Tor stickers.

On our extra day after the workshops, we visited Moez at his Internet Agency and interviewed him for a few hours about the state of filtering in his country. He confirmed that they renewed their Smartfilter license until Sept 2012, and that they still filter "the groups that want it" (government and schools), but for technical reasons they have turned off the global filters (they broke and nobody has fixed them). We pointed out that since an external company operates their filters — including for their military — then that company not only has freedom to censor anything they want, but they also get to see every single request when deciding whether to censor it. Moez used the phrase "national sovereignty" when explaining why it isn't a great idea for Tunisia to outsource their filtering. Great point: it would be foolish to imagine that this external company isn't logging things for their own purposes, whether that's "improving their product" or something more sinister. As we keep seeing, collecting a large data set and then hoping to keep it secret never seems to work out.

One of the points Jake kept hammering on throughout the week was "if *anything* is being filtered, then you have to realize that they're surveilling *everything* in order to make those filtering decisions." The Syrian logs help to drive the point home but it seems like a lot of people haven't really internalized it yet. We still find people thinking of Tor solely as an "anti-filter" tool and not considering the surveillance angle.

After the meeting with Moez, we went to visit one of the universities. We talked to a few dozen students who were really excited to find us there — to the point that they quickly located a video camera and interviewed us on the spot. They brought us to their security class, and informed the professor that we would be speaking for the first half hour of it. We gave an impassioned plea for them to learn more about Tor and teach other people in their country how to be safe online. I think the group of students there could be really valuable for creating local technical Tor resources. As a bonus, the traditional path for a computer science graduate of this university is to go work at Tunisia Telecom, the monopoly telco that hosts the filtering boxes; the more we can influence the incoming generations, the more the change will grow.

New Tor Browser Bundles

The Tor Browser Bundles have been updated to Vidalia 0.2.15. The OS X and Linux bundles have Torbutton 1.4.4, but a hotfix for Windows was released with 1.4.4.1 because 1.4.4 had a minor issue that prevented the Windows bundle from going to https://check.torproject.org.

https://www.torproject.org/download

Tor Browser Bundle (2.2.33-3)

  • Update Vidalia to 0.2.15
  • Update Torbutton to 1.4.4.1
  • Update NoScript to 2.1.4
  • Remove trailing dash from Windows version number (closes: #4160)
  • Make Tor Browser (Aurora) fail closed when not launched with a TBB profile
    (closes: #4192)

Vidalia 0.2.15 is out!

Hello everybody,

I'm happy to announce a new version for Vidalia, 0.2.15.

If you find any bugs or have ideas on how to improve Vidalia, please
remember to go to https://trac.torproject.org/ and file a ticket for it!

You can find the source tarball and its signature in here:
https://www.torproject.org/dist/vidalia/vidalia-0.2.15.tar.gz
https://www.torproject.org/dist/vidalia/vidalia-0.2.15.tar.gz.asc

TBB and other packages are going to be here soon; please be patient.

Here's what changed:

0.2.15 07-Oct-2011

  • Draw the bandwidth graph curves based on the local maximum, not
    the global maximum. Fixes bug 2188.
  • Add an option for setting up a non-exit relay to the Sharing
    configuration panel. This is meant to clarify what an exit policy
    and an exit relay are. Resolves bug 2644.
  • Display time statistics for bridges in UTC time, rather than local
    time. Fixes bug 3342.
  • Change the parameter for ordering the entries in the Basic Log
    list from currentTime to currentDateTime to avoid misplacing
    entries from different days.
  • Check the tor version and that settings are sanitized before
    trying to use the port autoconfiguration feature. Fixes bug 3843.
  • Provide a way to hide Dock or System Tray icons in OS X. Resolves
    ticket 2163.
  • Make new processes appear at front when they are started (OS X
    specific).

Support the Tor Network: Donate to Exit Node Providers

The Tor network is run by volunteers, and for the most part is entirely independent of the software development effort led by The Tor Project, Inc. While The Tor Project, Inc is a 501(c)3 non-profit that is happy to take donations to create more and better software, up until recently there was no way for you to fund deployment of more relays to improve network capacity and performance, aside from running those relays yourself.

We're happy to announce that both Noisebridge and TorServers.net are now able to take donations directly for the purpose of running high capacity Tor exit nodes.

Noisebridge is a US 501(c)3 non-profit, which means that for US citizens, donations are tax deductible. Torservers.net is a German non-profit organization whose donations are tax deductible for German citizens (and also potentially for citizens of other EU member states).

What are the pluses and minuses of donating as opposed to running your own relay? Glad you asked!

While it is relatively easy and risk-free to run a middle relay or a bridge, running an exit can be tough. You have to seek out a friendly ISP, explain Tor to them, and then navigate a laundry list of Internet bureaucracies to ensure that when abuse happens, the burden of answering complaints falls upon you and not your ISP.

These barriers are all made easier the larger your budget is. On top of this, like most things, bandwidth is cheaper in bulk. But not just Costco cheaper: the per-megabit price keeps falling the more you buy, all the way up into the gigabit range (and perhaps beyond, but no one has run a Tor node on anything faster).

At these scales, large exit nodes can pay as little as $1/mo per dedicated megabit/second. Sometimes less. This means that adding $30/mo to the hosting budget of a large exit node can buy almost 40 times more full-duplex dedicated bandwidth than a similarly priced business upgrade to your home ADSL line would buy, and about 50 times more bandwidth than Amazon EC2 instances at the entry-level price of $0.08 per half-duplex gigabyte, not counting CPU costs. (Bridge economics in terms of IP address space availability might still favor Amazon EC2, but that is a different discussion).
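
As a quick back-of-the-envelope check of the EC2 comparison, using only the prices quoted above (the 30-day month and the assumption of a fully utilized link are ours):

    # Back-of-the-envelope check of the EC2 comparison above. Prices are the
    # ones quoted in the post; the 30-day month and the assumption that the
    # exit link runs fully utilized are ours.
    budget = 30.0                        # dollars per month
    exit_mbps = budget / 1.0             # $1/mo per dedicated Mbit/s -> 30 Mbit/s
    seconds_per_month = 30 * 24 * 3600

    gb_one_way = exit_mbps * 1e6 / 8 * seconds_per_month / 1e9
    gb_full_duplex = 2 * gb_one_way      # count both directions

    ec2_gb = budget / 0.08               # $0.08 per half-duplex gigabyte

    print(f"exit node: ~{gb_full_duplex:,.0f} GB/month (both directions)")
    print(f"EC2:       ~{ec2_gb:,.0f} GB/month")
    print(f"ratio:     ~{gb_full_duplex / ec2_gb:.0f}x")    # roughly 50x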

The downside to donation is that network centralization can lead to a more fragile and a more observable network. If these nodes fail, the network will definitely feel the performance loss. In terms of observability, fewer nodes also means that fewer points of surveillance are required to deanonymize users (though some argue that more users will make such surveillance less reliable, no one has yet rigorously quantified that result against actual attacks).

Therefore, if you are able to run a high capacity relay or exit yourself (or have access to cheap/free/unused bandwidth at your work/school), running your own relay is definitely preferred. If you are part of the Tor community and want to accept donations, we'd love to add you to our recommended donor list. Please join us on the tor-relays mailing list to discuss node configuration and setup.

However, if configuring and maintaining a high capacity relay is not for you, donating a portion of the monthly hosting budgets of either of these organizations is an excellent way to support anonymity, privacy, and censorship circumvention for very large numbers of people.

September 2011 Progress Report

In September 2011 we made progress in a number of areas, including responding to Iran's use of DPI to block Tor, new Tor releases, the Tails 0.8 release, and more.

The PDF and plaintext versions of the report can be found attached to this blog post or at our monthly report archive:

https://archive.torproject.org/monthly-report-archive/2011-September-Mon...

and

https://archive.torproject.org/monthly-report-archive/2011-September-Mon...
