
Tor security advisory: "relay early" traffic confirmation attack

This advisory was posted on the tor-announce mailing list.

SUMMARY:

On July 4 2014 we found a group of relays that we assume were trying to deanonymize users. They appear to have been targeting people who operate or access Tor hidden services. The attack involved modifying Tor protocol headers to do traffic confirmation attacks.

The attacking relays joined the network on January 30 2014, and we removed them from the network on July 4. While we don't know when they started doing the attack, users who operated or accessed hidden services from early February through July 4 should assume they were affected.

Unfortunately, it's still unclear what "affected" includes. We know the attack looked for users who fetched hidden service descriptors, but the attackers likely were not able to see any application-level traffic (e.g. what pages were loaded or even whether users visited the hidden service they looked up). The attack probably also tried to learn who published hidden service descriptors, which would allow the attackers to learn the location of that hidden service. In theory the attack could also be used to link users to their destinations on normal Tor circuits too, but we found no evidence that the attackers operated any exit relays, making this attack less likely. And finally, we don't know how much data the attackers kept, and due to the way the attack was deployed (more details below), their protocol header modifications might have aided other attackers in deanonymizing users too.

Relays should upgrade to a recent Tor release (0.2.4.23 or 0.2.5.6-alpha), to close the particular protocol vulnerability the attackers used — but remember that preventing traffic confirmation in general remains an open research problem. Clients that upgrade (once new Tor Browser releases are ready) will take another step towards limiting the number of entry guards that are in a position to see their traffic, thus reducing the damage from future attacks like this one. Hidden service operators should consider changing the location of their hidden service.

THE TECHNICAL DETAILS:

We believe the attackers used a combination of two classes of attacks: a traffic confirmation attack and a Sybil attack.

A traffic confirmation attack is possible when the attacker controls or observes the relays on both ends of a Tor circuit and then compares traffic timing, volume, or other characteristics to conclude that the two relays are indeed on the same circuit. If the first relay in the circuit (called the "entry guard") knows the IP address of the user, and the last relay in the circuit knows the resource or destination she is accessing, then together they can deanonymize her. You can read more about traffic confirmation attacks, including pointers to many research papers, at this blog post from 2009:
https://blog.torproject.org/blog/one-cell-enough

The particular confirmation attack they used was an active attack where the relay on one end injects a signal into the Tor protocol headers, and then the relay on the other end reads the signal. These attacking relays were stable enough to get the HSDir ("suitable for hidden service directory") and Guard ("suitable for being an entry guard") consensus flags. Then they injected the signal whenever they were used as a hidden service directory, and looked for an injected signal whenever they were used as an entry guard.

The way they injected the signal was by sending sequences of "relay" vs "relay early" commands down the circuit, to encode the message they want to send. For background, Tor has two types of cells: link cells, which are intended for the adjacent relay in the circuit, and relay cells, which are passed to the other end of the circuit. In 2008 we added a new kind of relay cell, called a "relay early" cell, which is used to prevent people from building very long paths in the Tor network. (Very long paths can be used to induce congestion and aid in breaking anonymity). But the fix for infinite-length paths introduced a problem with accessing hidden services, and one of the side effects of our fix for bug 1038 was that while we limit the number of outbound (away from the client) "relay early" cells on a circuit, we don't limit the number of inbound (towards the client) relay early cells.

So in summary, when Tor clients contacted an attacking relay in its role as a Hidden Service Directory to publish or retrieve a hidden service descriptor (steps 2 and 3 on the hidden service protocol diagrams), that relay would send the hidden service name (encoded as a pattern of relay and relay-early cells) back down the circuit. Other attacking relays, when they get chosen for the first hop of a circuit, would look for inbound relay-early cells (since nobody else sends them) and would thus learn which clients requested information about a hidden service.
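To make the mechanism concrete, here is a toy sketch (in Python; this is our illustration, not the attackers' code, and the exact encoding they used is not known to us) of how a string could be written as a pattern of relay vs relay-early cells and recovered by a relay that only observes cell types:

    # Toy illustration (not Tor code): how a short string could be encoded as a
    # sequence of cell types, the way this advisory describes the attackers
    # encoding a hidden service name as a pattern of relay vs relay-early cells
    # sent back down the circuit. All Tor framing and crypto is omitted.

    RELAY, RELAY_EARLY = "relay", "relay_early"

    def encode(name):
        """Map each bit of the name's bytes to a cell type (1 -> relay_early)."""
        cells = []
        for byte in name.encode("ascii"):
            for i in range(7, -1, -1):
                cells.append(RELAY_EARLY if (byte >> i) & 1 else RELAY)
        return cells

    def decode(cells):
        """Reassemble the bytes from an observed sequence of cell types."""
        bits = [1 if c == RELAY_EARLY else 0 for c in cells]
        out = bytearray()
        for i in range(0, len(bits) - 7, 8):
            byte = 0
            for b in bits[i:i + 8]:
                byte = (byte << 1) | b
            out.append(byte)
        return out.decode("ascii")

    signal = encode("examplexyz.onion")   # hypothetical address, for illustration
    assert decode(signal) == "examplexyz.onion"

The only point here is that the choice of cell type is visible to every relay that handles the cell, which is what makes it usable as a covert channel (see point B below).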

There are three important points about this attack:

A) The attacker encoded the name of the hidden service in the injected signal (as opposed to, say, sending a random number and keeping a local list mapping random number to hidden service name). The encoded signal is encrypted as it is sent over the TLS channel between relays. However, this signal would be easy to read and interpret by anybody who runs a relay and receives the encoded traffic. And we might also worry about a global adversary (e.g. a large intelligence agency) that records Internet traffic at the entry guards and then tries to break Tor's link encryption. The way this attack was performed weakens Tor's anonymity against these other potential attackers too — either while it was happening or after the fact if they have traffic logs. So if the attack was a research project (i.e. not intentionally malicious), it was deployed in an irresponsible way because it puts users at risk indefinitely into the future.

(This concern is in addition to the general issue that it's probably unwise from a legal perspective for researchers to attack real users by modifying their traffic on one end and wiretapping it on the other. Tools like Shadow are great for testing Tor research ideas out in the lab.)

B) This protocol header signal injection attack is actually pretty neat from a research perspective, in that it's a bit different from previous tagging attacks which targeted the application-level payload. Previous tagging attacks modified the payload at the entry guard, and then looked for a modified payload at the exit relay (which can see the decrypted payload). Those attacks don't work in the other direction (from the exit relay back towards the client), because the payload is still encrypted at the entry guard. But because this new approach modifies ("tags") the cell headers rather than the payload, every relay in the path can see the tag.

C) We should remind readers that while this particular variant of the traffic confirmation attack allows high-confidence and efficient correlation, the general class of passive (statistical) traffic confirmation attacks remains unsolved and would likely have worked just fine here. So the good news is traffic confirmation attacks aren't new or surprising, but the bad news is that they still work. See https://blog.torproject.org/blog/one-cell-enough for more discussion.
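As a rough illustration of that passive variant, here is a toy sketch on made-up numbers (real correlation attacks use far better statistics and must cope with padding, delays, and many concurrent flows): an observer near the client and an observer near the destination each record per-second traffic volumes, then compare the two series.

    # Toy sketch of a passive (statistical) traffic confirmation attack.
    import random

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    random.seed(1)
    entry_side = [random.randint(0, 50_000) for _ in range(60)]        # bytes/sec seen at the guard
    exit_same = [max(0, b + random.randint(-2_000, 2_000)) for b in entry_side]  # same flow, far end
    exit_other = [random.randint(0, 50_000) for _ in range(60)]        # an unrelated flow

    print("same flow:      r =", round(pearson(entry_side, exit_same), 3))   # high
    print("unrelated flow: r =", round(pearson(entry_side, exit_other), 3))  # much lower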

Then the second class of attack they used, in conjunction with their traffic confirmation attack, was a standard Sybil attack — they signed up around 115 fast non-exit relays, all running on 50.7.0.0/16 or 204.45.0.0/16. Together these relays summed to about 6.4% of the Guard capacity in the network. Then, in part because of our current guard rotation parameters, these relays became entry guards for a significant chunk of users over their five months of operation.
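A back-of-the-envelope calculation, using the 6.4% figure above and a deliberately simplified model (each guard chosen independently in proportion to guard capacity, ignoring rotation over the five months), shows why that share matters:

    # Rough estimate of what a 6.4% share of guard capacity buys a Sybil attacker.
    attacker_guard_share = 0.064   # figure from this advisory
    guards_per_client = 3          # clients picked three guards at the time

    p_bad_guard = 1 - (1 - attacker_guard_share) ** guards_per_client
    print(f"~{p_bad_guard:.1%} of clients have at least one attacking entry guard")
    # roughly 18%, before accounting for guard rotation over time

This is also why moving clients from three entry guards to one (see the short-term steps below) reduces exposure: under the same simplified model, a single guard is malicious with probability equal to the attacker's raw guard share.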

We actually noticed these relays when they joined the network, since the DocTor scanner reported them. We considered the set of new relays at the time, and made a decision that it wasn't that large a fraction of the network. It's clear there's room for improvement in terms of how to let the Tor network grow while also ensuring we maintain social connections with the operators of all large groups of relays. (In general having a widely diverse set of relay locations and relay operators, yet not allowing any bad relays in, seems like a hard problem; on the other hand our detection scripts did notice them in this case, so there's hope for a better solution here.)

In response, we've taken the following short-term steps:

1) Removed the attacking relays from the network.

2) Put out a software update for relays to prevent "relay early" cells from being used this way.

3) Put out a software update that will (once enough clients have upgraded) let us tell clients to move to using one entry guard rather than three, to reduce exposure to relays over time.

4) Clients can tell whether they've received a relay cell or a relay-early cell. For expert users, the new Tor version warns you in your logs if a relay on your path injects any relay-early cells: look for the phrase "Received an inbound RELAY_EARLY cell".
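As a convenience for that last item, here is a minimal sketch that scans a Tor log for the quoted phrase (the log path is an assumption; point it at whatever file your "Log notice file" line in torrc writes to):

    # Minimal sketch: look for the RELAY_EARLY warning in a Tor notice log.
    LOG_PATH = "/var/log/tor/notices.log"   # hypothetical location

    with open(LOG_PATH, errors="replace") as log:
        hits = [line.rstrip() for line in log
                if "Received an inbound RELAY_EARLY cell" in line]

    if hits:
        print(f"{len(hits)} RELAY_EARLY warning(s) found; most recent:")
        print(hits[-1])
    else:
        print("no inbound RELAY_EARLY warnings in this log")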

The following longer-term research areas remain:

5) Further growing the Tor network and diversity of relay operators, which will reduce the impact from an adversary of a given size.

6) Exploring better mechanisms, e.g. social connections, to limit the impact from a malicious set of relays. We've also formed a group to pay more attention to suspicious relays in the network:
https://blog.torproject.org/blog/how-report-bad-relays

7) Further reducing exposure to guards over time, perhaps by extending the guard rotation lifetime:
https://blog.torproject.org/blog/lifecycle-of-a-new-relay
https://blog.torproject.org/blog/improving-tors-anonymity-changing-guard...

8) Better understanding statistical traffic correlation attacks and whether padding or other approaches can mitigate them.

9) Improving the hidden service design, including making it harder for relays serving as hidden service directory points to learn what hidden service address they're handling:
https://blog.torproject.org/blog/hidden-services-need-some-love

OPEN QUESTIONS:

Q1) Was this the Black Hat 2014 talk that got canceled recently?
Q2) Did we find all the malicious relays?
Q3) Did the malicious relays inject the signal at any points besides the HSDir position?
Q4) What data did the attackers keep, and are they going to destroy it? How have they protected the data (if any) while storing it?

Great questions. We spent several months trying to extract information from the researchers who were going to give the Black Hat talk, and eventually we did get some hints from them about how "relay early" cells could be used for traffic confirmation attacks, which is how we started looking for the attacks in the wild. They haven't answered our emails lately, so we don't know for sure, but it seems likely that the answer to Q1 is "yes". In fact, we hope they *were* the ones doing the attacks, since otherwise it means somebody else was. We don't yet know the answers to Q2, Q3, or Q4.

OpenSSL bug CVE-2014-0160

A new OpenSSL vulnerability affecting versions 1.0.1 through 1.0.1f is out today; it can be used to reveal memory to a connected client or server.

If you're using an OpenSSL version older than 1.0.1, you're safe.
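As a rough local sanity check (a sketch only: it inspects the OpenSSL that Python is linked against, which may differ from the one your Tor binary uses, and it ignores builds compiled without heartbeat support), you can test whether a version string falls in the vulnerable range:

    # Rough check of the OpenSSL version this Python is linked against.
    import re
    import ssl

    version = ssl.OPENSSL_VERSION           # e.g. "OpenSSL 1.0.1e 11 Feb 2013"
    print("linked against:", version)

    # CVE-2014-0160 affects 1.0.1 through 1.0.1f; 1.0.1g is fixed, and the
    # 1.0.0 and 0.9.8 branches never had the heartbeat code.
    match = re.search(r"OpenSSL 1\.0\.1([a-z]?)", version)
    if match and match.group(1) <= "f":
        print("this build is in the vulnerable 1.0.1 through 1.0.1f range")
    else:
        print("version string is outside the vulnerable range")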

Note that this bug affects way more programs than just Tor — expect everybody who runs an https webserver to be scrambling today. If you need strong anonymity or privacy on the Internet, you might want to stay away from the Internet entirely for the next few days while things settle.

Here are our first thoughts on what Tor components are affected:

  1. Clients: The browser part of Tor Browser shouldn't be affected, since it uses libnss rather than openssl. But the Tor client part is: Tor clients could possibly be induced to send sensitive information like "what sites you visited in this session" to your entry guards. If you're using TBB we'll have new bundles out shortly; if you're using your operating system's Tor package you should get a new OpenSSL package and then be sure to manually restart your Tor. [update: the bundles are out, and you should upgrade]
  2. Relays and bridges: Tor relays and bridges could maybe be made to leak their medium-term onion keys (rotated once a week), or their long-term relay identity keys. An attacker who has your relay identity key can publish a new relay descriptor indicating that you're at a new location (not a particularly useful attack). An attacker who has your relay identity key, has your onion key, and can intercept traffic flows to your IP address can impersonate your relay (but remember that Tor's multi-hop design means that attacking just one relay in the client's path is not very useful). In any case, best practice would be to update your OpenSSL package, discard all the files in keys/ in your DataDirectory, and restart your Tor to generate new keys. (You will need to update your MyFamily torrc lines if you run multiple relays.) [update: we've cut the vulnerable relays out of the network]
  3. Hidden services: Tor hidden services might leak their long-term hidden service identity keys to their guard relays. Like the last big OpenSSL bug, this shouldn't allow an attacker to identify the location of the hidden service [edit: if it's your entry guard that extracted your key, they know where they got it from]. Also, an attacker who knows the hidden service identity key can impersonate the hidden service. Best practice would be to move to a new hidden-service address at your convenience.
  4. Directory authorities: In addition to the keys listed in the "relays and bridges" section above, Tor directory authorities might leak their medium-term authority signing keys. Once you've updated your OpenSSL package, you should generate a new signing key. Long-term directory authority identity keys are offline so should not be affected (whew). More tricky is that clients have your relay identity key hard-coded, so please don't rotate that yet. We'll see how this unfolds and try to think of a good solution there.
  5. Tails is still tracking Debian oldstable, so it should not be affected by this bug.
  6. Orbot looks vulnerable; they have some new packages available for testing.
  7. The webservers in the https://www.torproject.org/ rotation needed (and got) upgrades. Maybe we'll need to throw away our torproject SSL web cert and get a new one too.

Firefox security bug (proxy-bypass) in current TBBs

A user has discovered a severe security bug in Firefox related to websockets bypassing the SOCKS proxy DNS configuration. This means when connecting to a websocket service, your Firefox will query your local DNS resolver, rather than only communicating through its proxy (Tor) as it is configured to do. This bug is present in current Tor Browser Bundles (2.2.35-9 on Windows; 2.2.35-10 on MacOS and Linux).

To fix this DNS leak / security hole, follow these steps:

  1. Type “about:config” (without the quotes) into the Firefox URL bar. Press Enter.
  2. Type “websocket” (again, without the quotes) into the search bar that appears below "about:config".
  3. Double-click on “network.websocket.enabled”. That line should now show “false” in the ‘Value’ column.

See Tor bug 5741 for more details. We are currently working on new bundles with a better fix.

Tor 0.2.2.34 is released (security patches)

Tor 0.2.2.34 fixes a critical anonymity vulnerability where an attacker
can deanonymize Tor users. Everybody should upgrade.

The attack relies on four components:

  • 1) Clients reuse their TLS cert when talking to different relays, so relays can recognize a user by the identity key in her cert.
  • 2) An attacker who knows the client's identity key can probe each guard relay to see if that identity key is connected to that guard relay right now.
  • 3) A variety of active attacks in the literature (starting from "Low-Cost Traffic Analysis of Tor" by Murdoch and Danezis in 2005) allow a malicious website to discover the guard relays that a Tor user visiting the website is using.
  • 4) Clients typically pick three guards at random, so the set of guards for a given user could well be a unique fingerprint for her.

This release fixes components #1 and #2, which is enough to block the attack; the other two remain as open research problems.
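To see why component 4 is plausible as a fingerprint, here is a rough illustration; the guard count used is a hypothetical assumption for illustration, not a measurement from the time:

    # Back-of-the-envelope look at component 4: how identifying is a set of
    # three guards?
    from math import comb

    num_guards = 800          # assumed count of relays with the Guard flag
    guards_per_client = 3

    print(f"{comb(num_guards, guards_per_client):,} possible guard triples")
    # ~85 million combinations, far more than there are Tor users, so a
    # client's particular guard set is very likely unique to her.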

Special thanks to "frosty_un" for reporting the issue to us! (As far as we know, this has nothing to do with any claimed attack currently getting attention in the media.)

Clients should upgrade so they are no longer recognizable by the TLS certs they present. Relays should upgrade so they no longer allow a remote attacker to probe them to test whether unpatched clients are currently connected to them.

This release also fixes several vulnerabilities that allow an attacker to enumerate bridge relays. Some bridge enumeration attacks still remain; see for example proposal 188.

https://torproject.org/download/download-easy

Changes in version 0.2.2.34 - 2011-10-26

Privacy/anonymity fixes (clients):

  • Clients and bridges no longer send TLS certificate chains on outgoing OR
    connections. Previously, each client or bridge would use the same cert chain
    for all outgoing OR connections until its IP address changed, which allowed any
    relay that the client or bridge contacted to determine which entry guards it is
    using. Fixes CVE-2011-2768. Bugfix on 0.0.9pre5; found by "frosty_un".
  • If a relay receives a CREATE_FAST cell on a TLS connection, it no longer
    considers that connection as suitable for satisfying a circuit EXTEND request.
    Now relays can protect clients from the CVE-2011-2768 issue even if the clients
    haven't upgraded yet.
  • Directory authorities no longer assign the Guard flag to relays that
    haven't upgraded to the above "refuse EXTEND requests to client connections"
    fix. Now directory authorities can protect clients from the CVE-2011-2768 issue
    even if neither the clients nor the relays have upgraded yet. There's a new
    "GiveGuardFlagTo_CVE_2011_2768_VulnerableRelays" config option to let us
    transition smoothly, else tomorrow there would be no guard relays.

Privacy/anonymity fixes (bridge enumeration):

  • Bridge relays now do their directory fetches inside Tor TLS connections,
    like all the other clients do, rather than connecting directly to the DirPort
    like public relays do. Removes another avenue for enumerating bridges. Fixes
    bug 4115; bugfix on 0.2.0.35.
  • Bridge relays now build circuits for themselves in a more similar way to
    how clients build them. Removes another avenue for enumerating bridges. Fixes
    bug 4124; bugfix on 0.2.0.3-alpha, when bridges were introduced.
  • Bridges now refuse CREATE or CREATE_FAST cells on OR connections that they
    initiated. Relays could distinguish incoming bridge connections from client
    connections, creating another avenue for enumerating bridges. Fixes
    CVE-2011-2769. Bugfix on 0.2.0.3-alpha. Found by "frosty_un".

Major bugfixes:

  • Fix a crash bug when changing node restrictions while a DNS lookup is
    in-progress. Fixes bug 4259; bugfix on 0.2.2.25-alpha. Bugfix by "Tey'".
  • Don't launch a useless circuit after failing to use one of a hidden
    service's introduction points. Previously, we would launch a new introduction
    circuit, but not set the hidden service which that circuit was intended to
    connect to, so it would never actually be used. A different piece of code would
    then create a new introduction circuit correctly. Bug reported by katmagic and
    found by Sebastian Hahn. Bugfix on 0.2.1.13-alpha; fixes bug 4212.

Minor bugfixes:

  • Change an integer overflow check in the OpenBSD_Malloc code so that GCC is
    less likely to eliminate it as impossible. Patch from Mansour Moufid. Fixes bug
    4059.
  • When a hidden service turns an extra service-side introduction circuit into
    a general-purpose circuit, free the rend_data and intro_key fields first, so we
    won't leak memory if the circuit is cannibalized for use as another
    service-side introduction circuit. Bugfix on 0.2.1.7-alpha; fixes bug
    4251.
  • Bridges now skip DNS self-tests, to act a little more stealthily. Fixes
    bug 4201; bugfix on 0.2.0.3-alpha, which first introduced bridges. Patch by
    "warms0x".
  • Fix internal bug-checking logic that was supposed to catch failures in
    digest generation so that it will fail more robustly if we ask for a
    nonexistent algorithm. Found by Coverity Scan. Bugfix on 0.2.2.1-alpha; fixes
    Coverity CID 479.
  • Report any failure in init_keys() calls launched because our IP address has
    changed. Spotted by Coverity Scan. Bugfix on 0.1.1.4-alpha; fixes CID 484.

Minor bugfixes (log messages and documentation):

  • Remove a confusing dollar sign from the example fingerprint in the man
    page, and also make the example fingerprint a valid one. Fixes bug 4309; bugfix
    on 0.2.1.3-alpha.
  • The next version of Windows will be called Windows 8, and it has a major
    version of 6, minor version of 2. Correctly identify that version instead of
    calling it "Very recent version". Resolves ticket 4153; reported by
    funkstar.
  • Downgrade log messages about circuit timeout calibration from "notice" to
    "info": they don't require or suggest any human intervention. Patch from Tom
    Lowenthal. Fixes bug 4063; bugfix on 0.2.2.14-alpha.

Minor features:

  • Turn on directory request statistics by default and include them in
    extra-info descriptors. Don't break if we have no GeoIP database. Backported
    from 0.2.3.1-alpha; implements ticket 3951.
  • Update to the October 4 2011 Maxmind GeoLite Country database.

Tor 0.2.2.20-alpha is out (security patches)

Tor 0.2.2.20-alpha does some code cleanup to reduce the risk of remotely
exploitable bugs. Thanks to Willem Pinckaers for notifying us of the
issue. The Common Vulnerabilities and Exposures project has assigned
CVE-2010-1676 to this issue.

We also fix a variety of other significant bugs, change the IP address
for one of our directory authorities, and update the minimum version
that Tor relays must run to join the network.

All Tor users should upgrade.

https://www.torproject.org/download/download

Changes in version 0.2.2.20-alpha - 2010-12-17
Major bugfixes:

  • Fix a remotely exploitable bug that could be used to crash instances
    of Tor remotely by overflowing on the heap. Remote-code execution
    hasn't been confirmed, but can't be ruled out. Everyone should
    upgrade. Bugfix on the 0.1.1 series and later.
  • Fix a bug that could break accounting on 64-bit systems with large

Tor 0.2.1.28 is released (security patches)

Tor 0.2.1.28 does some code cleanup to reduce the risk of remotely
exploitable bugs. Thanks to Willem Pinckaers for notifying us of the
issue. The Common Vulnerabilities and Exposures project has assigned
CVE-2010-1676 to this issue.

We also took this opportunity to change the IP address for one of our
directory authorities, and to update the geoip database we ship.

All Tor users should upgrade.

https://www.torproject.org/download/download

Changes in version 0.2.1.28 - 2010-12-17
Major bugfixes:

  • Fix a remotely exploitable bug that could be used to crash instances of Tor remotely by overflowing on the heap. Remote-code execution hasn't been confirmed, but can't be ruled out. Everyone should upgrade. Bugfix on the 0.1.1 series and later.

Directory authority changes:

  • Change IP address and ports for gabelmoo (v3 directory authority).


The Debian OpenSSL flaw: what does it mean for Tor clients?

There have been a lot of questions today about just what the recent Debian OpenSSL flaw means for Tor clients. Here's an attempt to explain it in a bit more detail. (Go read the Tor security advisory before reading this post.)

First, let's look at the security/anonymity implications for users who
aren't running on Debian, Ubuntu, or similar. These implications all
stem from the fact that some of the Tor relays and v3 directory authorities
have weak keys, so the Tor network isn't able to provide as much anonymity
as we would like.

The biggest issue is that perhaps 300 Tor relays were running with
weak keys and weak crypto, out of the roughly 1500-2000 total running
relays. What can an attacker do from this? If you happen to pick three
weak relays in a row for your circuit, then somebody watching your local
network connection (or watching the first relay you pick) could break all
the layers of Tor encryption and read the traffic as if they were watching
it at the exit relay.
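How likely is that? Here is a very rough estimate using the figures above, deliberately ignoring bandwidth weighting and the guard/exit position rules, so treat it only as an order-of-magnitude sketch:

    # Chance a circuit was built entirely from weak-key relays, under a
    # simplified uniform-selection model.
    weak_relays = 300
    total_relays = 1750        # midpoint of the 1500-2000 estimate above

    p_all_weak = (weak_relays / total_relays) ** 3
    print(f"~{p_all_weak:.2%} of circuits would consist of three weak relays")
    # about 0.5% under these simplifying assumptions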
