Tor security advisory: "relay early" traffic confirmation attack

This advisory was posted on the tor-announce mailing list.

SUMMARY:

On July 4 2014 we found a group of relays that we assume were trying to deanonymize users. They appear to have been targeting people who operate or access Tor hidden services. The attack involved modifying Tor protocol headers to do traffic confirmation attacks.

The attacking relays joined the network on January 30 2014, and we removed them from the network on July 4. While we don't know when they started doing the attack, users who operated or accessed hidden services from early February through July 4 should assume they were affected.

Unfortunately, it's still unclear what "affected" includes. We know the attack looked for users who fetched hidden service descriptors, but the attackers likely were not able to see any application-level traffic (e.g. what pages were loaded or even whether users visited the hidden service they looked up). The attack probably also tried to learn who published hidden service descriptors, which would allow the attackers to learn the location of that hidden service. In theory the attack could also be used to link users to their destinations on normal Tor circuits too, but we found no evidence that the attackers operated any exit relays, making this attack less likely. And finally, we don't know how much data the attackers kept, and due to the way the attack was deployed (more details below), their protocol header modifications might have aided other attackers in deanonymizing users too.

Relays should upgrade to a recent Tor release (0.2.4.23 or 0.2.5.6-alpha), to close the particular protocol vulnerability the attackers used — but remember that preventing traffic confirmation in general remains an open research problem. Clients that upgrade (once new Tor Browser releases are ready) will take another step towards limiting the number of entry guards that are in a position to see their traffic, thus reducing the damage from future attacks like this one. Hidden service operators should consider changing the location of their hidden service.

THE TECHNICAL DETAILS:

We believe they used a combination of two classes of attacks: a traffic confirmation attack and a Sybil attack.

A traffic confirmation attack is possible when the attacker controls or observes the relays on both ends of a Tor circuit and then compares traffic timing, volume, or other characteristics to conclude that the two relays are indeed on the same circuit. If the first relay in the circuit (called the "entry guard") knows the IP address of the user, and the last relay in the circuit knows the resource or destination she is accessing, then together they can deanonymize her. You can read more about traffic confirmation attacks, including pointers to many research papers, at this blog post from 2009:
https://blog.torproject.org/blog/one-cell-enough
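To illustrate the passive flavor of this idea (not the active variant described below), traffic confirmation can be as simple as correlating per-second cell counts observed at the two ends of a candidate circuit. A minimal sketch with simulated timings rather than real Tor traffic; the window size and jitter model are illustrative assumptions:

```python
import random

def bin_counts(timestamps, window=1.0, n_bins=60):
    """Count events per fixed-size time window."""
    counts = [0] * n_bins
    for t in timestamps:
        i = int(t / window)
        if 0 <= i < n_bins:
            counts[i] += 1
    return counts

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

# Simulate: the guard and the far end see the same flow plus network
# jitter; an unrelated flow produces uncorrelated timings.
random.seed(7)
flow = [random.uniform(0, 60) for _ in range(500)]
at_guard = bin_counts(flow)
at_far_end = bin_counts([t + random.gauss(0.05, 0.02) for t in flow])
unrelated = bin_counts([random.uniform(0, 60) for _ in range(500)])

print("same circuit:     ", round(pearson(at_guard, at_far_end), 2))   # high
print("different circuits:", round(pearson(at_guard, unrelated), 2))   # near zero
```

The point of the sketch is only that matching the two ends of a circuit requires no cryptographic break at all, just observation at both ends.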

The particular confirmation attack they used was an active attack where the relay on one end injects a signal into the Tor protocol headers, and then the relay on the other end reads the signal. These attacking relays were stable enough to get the HSDir ("suitable for hidden service directory") and Guard ("suitable for being an entry guard") consensus flags. Then they injected the signal whenever they were used as a hidden service directory, and looked for an injected signal whenever they were used as an entry guard.

The way they injected the signal was by sending sequences of "relay" vs "relay early" commands down the circuit, to encode the message they want to send. For background, Tor has two types of cells: link cells, which are intended for the adjacent relay in the circuit, and relay cells, which are passed to the other end of the circuit. In 2008 we added a new kind of relay cell, called a "relay early" cell, which is used to prevent people from building very long paths in the Tor network. (Very long paths can be used to induce congestion and aid in breaking anonymity). But the fix for infinite-length paths introduced a problem with accessing hidden services, and one of the side effects of our fix for bug 1038 was that while we limit the number of outbound (away from the client) "relay early" cells on a circuit, we don't limit the number of inbound (towards the client) relay early cells.
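The advisory does not publish the attackers' exact encoding, but the mechanism is easy to picture. A hypothetical sketch (the one-bit-per-cell scheme and all names here are my own illustration, not the attackers' code): each bit of the message selects which cell type to send, and the observer at the other end reads the bits back out of the sequence of cell types.

```python
# Toy illustration of signaling via cell types. Real Tor cells are more
# complex; this shows only the covert-channel idea.
RELAY, RELAY_EARLY = "relay", "relay_early"

def encode_signal(name: str) -> list:
    """Each bit of the name becomes one cell type: 1 -> RELAY_EARLY, 0 -> RELAY."""
    bits = "".join(f"{byte:08b}" for byte in name.encode("ascii"))
    return [RELAY_EARLY if b == "1" else RELAY for b in bits]

def decode_signal(cells: list) -> str:
    """Read the bit pattern back out of the observed cell types."""
    bits = "".join("1" if c == RELAY_EARLY else "0" for c in cells)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("ascii")

cells = encode_signal("example.onion")
print(decode_signal(cells))  # round-trips the name
```

Because legitimate traffic never contains inbound relay-early cells at all, even a handful of them stands out unambiguously to a colluding guard.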

So in summary, when Tor clients contacted an attacking relay in its role as a Hidden Service Directory to publish or retrieve a hidden service descriptor (steps 2 and 3 on the hidden service protocol diagrams), that relay would send the hidden service name (encoded as a pattern of relay and relay-early cells) back down the circuit. Other attacking relays, when they get chosen for the first hop of a circuit, would look for inbound relay-early cells (since nobody else sends them) and would thus learn which clients requested information about a hidden service.

There are three important points about this attack:

A) The attacker encoded the name of the hidden service in the injected signal (as opposed to, say, sending a random number and keeping a local list mapping random number to hidden service name). The encoded signal is encrypted as it is sent over the TLS channel between relays. However, this signal would be easy to read and interpret by anybody who runs a relay and receives the encoded traffic. And we might also worry about a global adversary (e.g. a large intelligence agency) that records Internet traffic at the entry guards and then tries to break Tor's link encryption. The way this attack was performed weakens Tor's anonymity against these other potential attackers too — either while it was happening or after the fact if they have traffic logs. So if the attack was a research project (i.e. not intentionally malicious), it was deployed in an irresponsible way because it puts users at risk indefinitely into the future.

(This concern is in addition to the general issue that it's probably unwise from a legal perspective for researchers to attack real users by modifying their traffic on one end and wiretapping it on the other. Tools like Shadow are great for testing Tor research ideas out in the lab.)

B) This protocol header signal injection attack is actually pretty neat from a research perspective, in that it's a bit different from previous tagging attacks which targeted the application-level payload. Previous tagging attacks modified the payload at the entry guard, and then looked for a modified payload at the exit relay (which can see the decrypted payload). Those attacks don't work in the other direction (from the exit relay back towards the client), because the payload is still encrypted at the entry guard. But because this new approach modifies ("tags") the cell headers rather than the payload, every relay in the path can see the tag.
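The asymmetry can be sketched concretely. In the toy model below (XOR with a hash-derived stream stands in for real onion encryption; this is NOT real crypto and not Tor's actual cell format), the payload is wrapped in one layer per hop, while the cell command travels outside those layers, so any hop can read a command-level tag even when the payload is still ciphertext to it:

```python
import hashlib

def layer(key: bytes, data: bytes) -> bytes:
    """Toy per-hop stream cipher (XOR with a hash-derived stream). NOT real crypto."""
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

hops = [b"guard-key", b"middle-key", b"exit-key"]
payload = b"GET /index.html"

# The client onion-encrypts the payload once per hop...
cell_payload = payload
for key in reversed(hops):
    cell_payload = layer(key, cell_payload)

# ...but the cell command ("relay" vs "relay_early") travels outside
# those layers, so every hop can read it directly.
cell = {"command": "relay_early", "payload": cell_payload}

# After the guard peels its layer, the payload is still ciphertext to it,
# yet the command-level tag is plainly visible.
after_guard = layer(hops[0], cell["payload"])
print(cell["command"], after_guard != payload)
```

This is why a payload tag is only detectable at the exit (where all layers are off), while a cell-type tag is detectable at every position, including the guard.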

C) We should remind readers that while this particular variant of the traffic confirmation attack allows high-confidence and efficient correlation, the general class of passive (statistical) traffic confirmation attacks remains unsolved and would likely have worked just fine here. So the good news is traffic confirmation attacks aren't new or surprising, but the bad news is that they still work. See https://blog.torproject.org/blog/one-cell-enough for more discussion.

Then the second class of attack they used, in conjunction with their traffic confirmation attack, was a standard Sybil attack — they signed up around 115 fast non-exit relays, all running on 50.7.0.0/16 or 204.45.0.0/16. Together these relays summed to about 6.4% of the Guard capacity in the network. Then, in part because of our current guard rotation parameters, these relays became entry guards for a significant chunk of users over their five months of operation.

We actually noticed these relays when they joined the network, since the DocTor scanner reported them. We considered the set of new relays at the time, and made a decision that it wasn't that large a fraction of the network. It's clear there's room for improvement in terms of how to let the Tor network grow while also ensuring we maintain social connections with the operators of all large groups of relays. (In general having a widely diverse set of relay locations and relay operators, yet not allowing any bad relays in, seems like a hard problem; on the other hand our detection scripts did notice them in this case, so there's hope for a better solution here.)

In response, we've taken the following short-term steps:

1) Removed the attacking relays from the network.

2) Put out a software update for relays to prevent "relay early" cells from being used this way.

3) Put out a software update that will (once enough clients have upgraded) let us tell clients to move to using one entry guard rather than three, to reduce exposure to relays over time.

4) Clients can tell whether they've received a relay or a relay-early cell. For expert users, the new Tor version warns you in your logs if a relay on your path injects any relay-early cells: look for the phrase "Received an inbound RELAY_EARLY cell".
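For instance, an expert user could scan their Tor notice log for that phrase. A small sketch; only the quoted warning phrase comes from the advisory, while the sample log-line format and any file path are illustrative assumptions (point it at wherever your torrc's "Log notice file" line writes):

```python
# Scan Tor log lines for the RELAY_EARLY warning mentioned above.
WARNING = "Received an inbound RELAY_EARLY cell"

def flagged_lines(log_lines):
    """Return every log line containing the warning phrase."""
    return [line for line in log_lines if WARNING in line]

# Illustrative sample lines, not verbatim Tor output.
sample = [
    "Aug 01 12:00:01.000 [notice] Bootstrapped 100%: Done.",
    "Aug 01 12:03:44.000 [warn] Received an inbound RELAY_EARLY cell on circuit.",
]
print(flagged_lines(sample))
```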

The following longer-term research areas remain:

5) Further growing the Tor network and diversity of relay operators, which will reduce the impact from an adversary of a given size.

6) Exploring better mechanisms, e.g. social connections, to limit the impact from a malicious set of relays. We've also formed a group to pay more attention to suspicious relays in the network:
https://blog.torproject.org/blog/how-report-bad-relays

7) Further reducing exposure to guards over time, perhaps by extending the guard rotation lifetime:
https://blog.torproject.org/blog/lifecycle-of-a-new-relay
https://blog.torproject.org/blog/improving-tors-anonymity-changing-guar…

8) Better understanding statistical traffic correlation attacks and whether padding or other approaches can mitigate them.

9) Improving the hidden service design, including making it harder for relays serving as hidden service directory points to learn what hidden service address they're handling:
https://blog.torproject.org/blog/hidden-services-need-some-love

OPEN QUESTIONS:

Q1) Was this the Black Hat 2014 talk that got canceled recently?
Q2) Did we find all the malicious relays?
Q3) Did the malicious relays inject the signal at any points besides the HSDir position?
Q4) What data did the attackers keep, and are they going to destroy it? How have they protected the data (if any) while storing it?

Great questions. We spent several months trying to extract information from the researchers who were going to give the Black Hat talk, and eventually we did get some hints from them about how "relay early" cells could be used for traffic confirmation attacks, which is how we started looking for the attacks in the wild. They haven't answered our emails lately, so we don't know for sure, but it seems likely that the answer to Q1 is "yes". In fact, we hope they *were* the ones doing the attacks, since otherwise it means somebody else was. We don't yet know the answers to Q2, Q3, or Q4.

Seth Schoen

July 31, 2014


"Then, in part because of our current guard rotation parameters, these relays became entry guards for a significant chunk of users over their five months of operation."

Over what time span is it assumed that the relays became entry guards? A day, a week, several weeks?

Seth Schoen

July 31, 2014


Could you please clarify that:

To de-anonymize a user, the malicious source must control the entry guard as well as another relay acting as the "exit" towards the hidden service. Is that correct?
Then: what is the probability of getting both the entry AND the "exit" node (a malicious relay in the "middle" position, instead of entry or "exit", wouldn't help) from this malicious source?

Seth Schoen

July 31, 2014


So if I got this right, 6.4% of nodes were rogue? So that means for each connection to Tor there was a 6.4% chance you'd connect to one of the rogues, and then if you were accessing a HS, there was also a 6.4% chance the HSDir you queried was also rogue. So there's roughly a 0.4% chance that connection is affected.
BUT, if you did this 100 times over the affected period, there would be roughly a 1 in 3 chance it occurred. Anyone care to check my math?

It was 6.4% of the guard capacity. Tor load balances by capacity, so the number of relays isn't usually the right metric for things.

Also, the attack you have invented with your "if you did this 100 times" line is exactly what guard relays are designed to handle:
https://www.torproject.org/docs/faq#EntryGuards

The math is complicated by the fact that clients pick among three guards at once (so the chance of having one of them is more than 6.4% over time).

Yes, the math is OK. The first calculation comes from 6.4% times 6.4%, which gives 0.4096%, or roughly 0.41% (less than 1%). And the other number comes from raising 99.59% (the probability of going clean on a single visit) to the 100th power, the probability of going clean all 100 times, which gives about 66%: so roughly two out of three users making 100 visits would come out completely clean, i.e., undetected, and about one in three would be affected.

No, you should read about how entry guards work.

If the entry guard you picked turns out to be bad, then over the course of the 100 circuits you'll probably pick a bad exit and you lose. But if the entry guard you picked turns out to be good, then all 100 circuits will be safe.
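The back-of-the-envelope numbers in this exchange can be checked directly. A sketch that treats "6.4% of guard capacity" loosely as a 6.4% selection probability (real Tor path selection is bandwidth-weighted and more involved, so these are illustrative figures only):

```python
p_bad = 0.064  # rough fraction of guard capacity that was malicious

# Naive model: a fresh entry relay AND a fresh HSDir for every visit.
p_hit_once = p_bad * p_bad
p_hit_in_100 = 1 - (1 - p_hit_once) ** 100
print(f"per-visit: {p_hit_once:.4%}, over 100 visits: {p_hit_in_100:.1%}")

# Guard model: clients at the time kept a set of three entry guards.
# If none of the three is malicious, no number of visits exposes you
# at the entry side; if one is, you are exposed for the whole period.
p_any_bad_guard = 1 - (1 - p_bad) ** 3
print(f"chance at least one of three guards is bad: {p_any_bad_guard:.1%}")
```

This is the point of the reply above: guards convert a "lose eventually" game into a one-time gamble, which is also why reducing three guards to one (step 3 in the advisory) further shrinks the exposed population.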

Seth Schoen

July 31, 2014


Maybe some of you passionate, extra-communicative, literate Tor users could contact the reporters who penned this incredibly one-sided and garbled story for the Post-Gazette here in Pittsburgh, where the Software Engineering Institute is based (at CMU), and get them to perhaps at least include a reference to the incredible immorality of the researchers' actions in attacking this wonderful project?

Please? Seriously.

Carnegie Mellon engineers aim to unmask anonymous surfing software

http://www.post-gazette.com/business/technology/2014/07/31/2-CMU-experts-said-to-unmask-surfing-software/stories/201407310210

By Matt Nussbaum, Bill Schackner and Liz Navratil / Pittsburgh Post-Gazette
July 31, 2014 12:00 AM

Two Carnegie Mellon University researchers may have removed the veil of secrecy from the Tor Project, a free software program that allows users to anonymously surf the Internet.

Tor was the preferred mode of covert communication used by Edward Snowden, a former National Security Agency contractor. Mr. Snowden leaked a trove of confidential documents that revealed secret snooping of personal communications conducted by the NSA.

Governments in the United States, Russia and Britain have worked for years to unmask users of Tor, which, according to the British Broadcasting Corp., has been linked to illegal activity including drug deals and the sale of child-abuse images.

It seems that the CMU researchers may have cracked it.

Alexander Volynkin and Michael McCord work at the university’s Software Engineering Institute, whose efforts in Oakland are financed by the Defense Department.

Mr. Volynkin was slated to give a talk titled “You Don’t Have to be the NSA to Break Tor: Deanonymizing Users on a Budget” at the Black Hat USA hacker conference in Las Vegas beginning Saturday.

The talk was canceled after CMU said neither the university nor SEI had approved of the talk, according to a post by Black Hat organizers.

Messages left Wednesday night for Mr. Volynkin and Mr. McCord were not returned.

According to the university’s website, Mr. Volynkin is a research scientist and Mr. McCord is a software vulnerability analyst, both with SEI’s cybersecurity solutions department.

“Right now, I’m told we’re not commenting,” CMU spokesman Ken Walters said when asked Wednesday night about the scientists’ work.

SEI spokesman Richard Lynch said he could offer no elaboration beyond a schedule update added to the Black Hat conference website.

According to the Black Hat website, Mr. Volynkin has research interests that include network security, malware behavior analysis, advanced reverse engineering techniques and cryptanalysis. He wrote various scientific publications and a book on malware behavior analysis, and has a patent related to full disk encryption technologies, the site said.

One of Tor’s creators, Roger Dingledine, announced on his blog Wednesday that an attack on the site was discovered July 4.

He wrote that he believes the attack, initiated in January, was led by the CMU researchers, who were trying to “deanonymize users.”

“We spent several months trying to extract information from the researchers who were going to give the Black Hat talk,” Mr. Dingledine wrote. “They haven’t answered our emails lately, so we don’t know for sure, but it seems likely that” they were the hackers.

SEI is a federally funded research and development center with a mission to “research software and cybersecurity problems of considerable complexity,” according to its website.

In 2010, the Defense Department extended its contract with SEI through June 2015. The contract was worth $584 million, or a little over $110 million a year.

In what may have been a coincidence, the director of the FBI, James B. Comey, and the assistant U.S. attorney general for national security, John Carlin, came to Pittsburgh on Wednesday to laud the city's contributions to efforts to fight cybercrime.

Mr. Carlin spoke to a crowd of about 100 at the SEI in Oakland but didn't mention Tor.

Later, he stood beside Mr. Comey at a news conference at the FBI's Pittsburgh office.

Mr. Comey, who did not mention Tor either, said his plan to boost FBI staffing across the nation by 1,500 later this year will include sending more agents to the FBI's Pittsburgh office to focus on cybercrime.

Briefly touching on the first-of-their-kind indictments of Chinese officials for stealing trade secrets from Pittsburgh-based companies, Mr. Comey said, “It is no coincidence the work that has come out of Pittsburgh is the product of something that I was just about to describe as magical. I hope it's not magical because that will be harder to replicate.”

Matt Nussbaum: mnusbaum@post-gazette.com or 412-263-1504; Bill Schackner: bschackner@post-gazette.com or 412-263-1977; Liz Navratil: lnavratil@post-gazette.com or 412-263-1510.

Seth Schoen

July 31, 2014


You still haven't solved the problems with MITM attacks?

Two simple working examples (attacks on client users).

1) Ineffective way
A small device.
Mark packets before they reach Tor's servers. This needs one device and one modified exit node.

I realized a few years ago that it is theoretically possible to mark packets in the Tor network.
You cannot solve this problem completely; another slight modification of a node is all it takes.
This is the simplest "attack": any semi-professional hacker familiar with the Tor network could carry it out alone.

2) Super-effective way
Interception and substitution of network connections and traffic
+
emulation of the Tor network, including the master server.
Hardware or software.
Hardware: a small device mounted on the Internet line, at any point before Tor's servers, doing the spoofing.
This needs one device and nothing more.

There are much cleverer MITM attacks (without the use of spy devices), because the "encryption" "system", the node-changing "system", and almost all the rest of Tor's features are no longer effective today.
They were effective and robust 10-14 years ago.
Today, Tor is just outdated trash.

Seth Schoen

July 31, 2014


I'm frankly amazed how complacent people are here. "Don't worry", "it's fine", "this is a conspiracy theory" in reply to others.

I think we have to make certain assumptions. If this attack came through that university network, and the university is impervious to FOIA requests, it seems reasonable to assume the universities are effectively acting on behalf of the government or power itself. It's a convenient front.

If governments specifically claim to have arrested people accessing illegal pornography over Tor, we should assume the worst and that the claim is true (even if it isn't). We don't know how it is true, whether it is a bug in FF or some other, as-yet-undiscovered defect, but we should take the claim seriously. It may be that governments with access to virtually the entire Internet (NSA/GCHQ) blanket-target Tor users and assume they are doing something suspicious. I imagine this could be a genuine threat in itself.

So, what's your suggestion?

To stop using Tor?
Going on the web NOT anonymously?
Or do you think that political activists all over the world should stop their activities, just because some fanatics said that "we won't reveal our methods to track suspects" (without mentioning Tor, BTW!)?

These months we are looking at simultaneous attacks to the best security tools (Tor, Truecrypt), all of them lacking credibility.
Being panicked is just what they want.

Regardless of how hard it is for them to track Tor users, it's easier to track non-Tor users; ergo, the more people they convince not to use Tor, the easier their job is.

Seth Schoen

August 01, 2014


From http://archives.seul.org/or/talk/Jul-2014/msg00648.html

"Trying this right now gives (unexpectedly) only 121 Guards (> 2MBps) and 130 Exit nodes, really working."

Can you determine whether these smaller-than-expected numbers are the result of a deliberate blocking of nodes to push selected nodes into a favoured position?

arma

August 01, 2014

In reply to Anonymous (not verified)


I don't know what the person was smoking to produce those numbers.

So no -- you'd have to figure out what he/she was doing. My assumption based on the other things that person has done is that they're wrong here.

Seth Schoen

August 01, 2014


Will there be a Tor Browser for iOS? Why isn't there one now?

What do you think of OnionBrowser?

It's because we don't want to give you an app full of application-level privacy holes and then put the Tor name on it.

I'm glad the onionbrowser guy is working on it, since it helps everybody get closer to a world where we could have a safe browser on iOS. But that's not the same as having a package that normal users can use and get safety comparable to the Tor Browser.

It's nice that someone put effort into making an app like OnionBrowser. Unfortunately, there are fundamental limitations in iOS which prevent it from being useful. For example, the system-wide video player leaks your IP address. Also, sites can access the HTML5 geolocation API from the embedded Safari frame. This could make your iPhone send your GPS coordinates to an attacker without your knowledge!

Seth Schoen

August 02, 2014


@arma: Would it be a clever idea to use my one free year of the Amazon cloud service to set up my own obfsproxy bridge and use that very bridge as my entry node, knowing that it has not been tampered with, as it essentially is my own?
In doing so, I would have a 12-month free subscription to a safe and uncompromised entry node. Or am I missing something?

That is a new angle.
Could they tamper with the actual running instance (or maybe have someone tamper with it), or is it solely about (for example) logging connections to the bridge?

Seth Schoen

August 02, 2014


660 pedophiles from the darknet were caught in the UK last month; I guess this is how they did it. If all this was done to remove CP and drugs, I don't really mind.

"If all this was done to remove cp and drugs I don't really mind."

1.) Regarding "CP": How much does merely removing (some of) the evidence of a crime do to help the victims of that crime or to prevent future ones?

2.) Regarding drugs:
- These actions are taken by countries that glorify and promote alcohol. How much more deadly than alcohol are any of the substances in question that don't enjoy such legal and social blessings?
- What about people who are dying a miserable death and find their only relief in marijuana? In many places it is still forbidden even in such cases.

3.) Regarding both and anything else: Do you really believe the /ends/ (prosecuting what you consider evils) always justify the /means/ (arguably massive privacy and other civil liberty violations, massive expenditure of resources that could arguably be used more efficiently and for more urgent needs, etc.)?

Seth Schoen

August 02, 2014


"tor is dead"

Every time he says this, we should all start thumping our drum and singing the Dies Irae from Verdi's Requiem at the top of our lungs, so that NSA will know we are listening!

Seth Schoen

August 02, 2014


When facing state-sponsored attacks from lethal organizations like NSA (but also smaller intelligence agencies working for other nations, including nations adversarial to the USA), we need to continually be mindful not only of technical but also economic and socio-political considerations.

HRW (Human Rights Watch) and the ACLU (American Civil Liberties Union), two leading human rights organizations, have published a major joint white paper outlining the chilling effects of NSA's global panopticon on free speech, journalism, and democracy itself in the EU and FVEY nations:

https://www.hrw.org/news/2014/07/28/us-surveillance-harming-journalism-…

https://www.aclu.org/blog/national-security-free-speech/how-surveillanc…

https://www.eff.org/deeplinks/2014/07/nsa-surveillance-chilling-effects

http://www.wired.com/2014/07/the-big-costs-of-nsa-surveillance-that-no-…
Personal Privacy Is Only One of the Costs of NSA Surveillance
Kim Zetter
29 Jul 2014

One comment the authors often encountered in interviewing journalists and lawyers had an all too familiar ring:

Both journalists and lawyers also emphasized that taking such elaborate steps to do their jobs makes them feel like they're doing something wrong. As one lawyer put it, "I'll be damned if I'm going to start acting like a drug dealer in order to protect my client's confidentiality."

I have lost count of the number of times I have heard that over the last ten years, even from Pulitzer prize winning reporters. We need to all keep telling them: no-one likes acting paranoid, but let's not forget that WE are not in fact doing anything wrong. To the contrary, when we resist oppression, we are doing something RIGHT. Our enemies have been revealed as porn-passing lethal drone-striking kidnappers who lie to their own governments. THEY are doing something wrong. Citizens are not doing anything wrong by exercising our natural right of self-defense, and journalists and lawyers have a DUTY (to the public, to their clients) to use all available countermeasures.

"Maybe some of you passionate, extra-communicative, literate Tor users could contact the reporters who penned this incredibly one-sided and garbled story for the Post-Gazette here in Pittsburgh, where the Software Engineering Institute is based (at CMU), and get them to perhaps at least include a reference to the incredible immorality of the researchers' actions in attacking this wonderful project?"

No chance of that happening. "Reporters" like that think they are simply a cheering squad for the local business associations, or their publishers "most favored" politicians.

Carnegie Mellon has long-standing close ties to the USIC. Some researchers there are involved in things like exploiting AI for "algorithmic governance", for example. Every lolcat's ears should prick up whenever that term is mentioned because this notion is the foundation of the population oppression machinery to which NSA is feeding all that data ("collect it all").

Journalism is not quite dead, as the excellent work of journalists like Glenn Greenwald, Laura Poitras, Barton Gellman, Kim Zetter, Julia Angwin, and Marcy Wheeler shows.

Seth Schoen

August 02, 2014


"[Tor] needs to be scrapped and a new network needs to be designed and implemented, Tor failed, come to terms with it, accept it, stop running a network that gets people pwnt on a daily basis now"

Tor failed? That is not the story told by the Snowden leaks, which suggest that when talking amongst themselves (so, not hawking anti-Tor FUD), the spooks confess Tor creates lots of problems for them. Good. We need to make even more problems for them, until THEY give up and go home.

Recalling that the capabilities of occasionally adversarial nations like Russia and China are comparable to those of the FVEY monster, GCHQ would hardly have deeply incorporated Tor into their own infrastructure if they thought that Russia or China could routinely deanonymize Tor users at will.

We are all caught up in an arms race pitting governments against (mainly) their own citizens. Tor alone is not enough, but because it has been continually developed, tested, attacked, fixed, and improved by smart researchers for a decade, it is one of the best understood and least unreliable platforms currently available to bloggers, journalists, organizers, whistleblowers, elected politicians, lawyers, and others (such as the spooks themselves) who often need to use secure anonymous communication.

An important point about NSA's human capital: while the agency's thousands of employees do include experts in arcane specialties in ELINT and such, for any novel problem (and these days many of its technical problems are novel, as the agency itself constantly complains) they try to bring in outside consultants. In particular, some of their tailored CNE (computer network exploitation, i.e. malware) appears to be bought from "cybersurveillance-as-a-service" companies like Gamma International and Hacking Team, which also sell to the most oppressive governments on the planet, such as Saudi Arabia and Vietnam.

Citizen Lab in Toronto is a good example of the growing number of extremely smart and knowledgeable experts who are organizing to fight online oppression wherever it rears its ugly head. They just published an important new report in their long-running series exposing the misdeeds of companies like Gamma:

http://www.theregister.co.uk/2014/07/31/citizen_lab_alleges_middle_east…
Securobods claim Middle East govts' fingerprints all over malware flung at journos
Darren Pauli
31 Jul 2014

Citizen Lab has also just been profiled in Edward Snowden's hangout-of-record, Ars Technica:

http://arstechnica.com/security/2014/07/inside-citizen-lab-the-hacker-h…
Inside Citizen Lab, the “Hacker Hothouse” protecting you from Big Brother
Globe-spanning white hat network hacked for the Dalai Lama, inspired arms legislation.
Joshua Kopstein
31 Jul 2014

Seth Schoen

August 02, 2014


"I think we have to make certain assumptions. If this attack came through that university network, and the university is impervious to FOIA requests, it seems reasonable to assume the universities are effectively acting on behalf of the government or power itself. It's a convenient front."

No need to assume. Many FVEY universities (mostly in USA, but also some in UK and a handful in Canada, Australia, and even New Zealand) have certain research groups which are very closely tied to their spooks. These ties are even public information, although the evidence is obscure and rarely discussed by media organizations. Carnegie Mellon is one of the best known examples.

If you know who Carnegie was, and who the Mellons are, it makes perfect sense that CMU acts on behalf of the real power behind the USG: the money interests.

Never forget that, as even the US and UK governments privately acknowledge, over the next few decades the very notions of nation states and the rule of law will become irrelevant.

It is ironic in a way that in public they say such nasty things about "anarchists", because the fact is that they have themselves made the willing choice to abandon their responsibility to govern, in favor of encouraging the transition from laws and governance as humanity has known it, to "algorithmic governance". Which means "continual monitoring" by corporations who put us all into computer models fed by all that data the surveillance machinery is collecting 24/7/365, and use the model predictions to decide what "suasion" methods to apply to us individually in order to modify our behavior in ways which will benefit their bottom line. "Algorithmic governance" is the real anarchy, and it has been brought into being by the very governments which it is rapidly supplanting.

So chin up: NSA is doomed, even if we don't succeed in eradicating it within the year, as I hope we will. In the longer term, the USG may not be doomed to vanish entirely, but it will become irrelevant.

Many of the recent actions of the USG only make sense in view of its collective sense of desperation at losing all its power and prestige on the world stage. But of course it has people like Hayden and Alexander to thank for encouraging the growth of "algorithmic governance", so it nursed the viper at its own breast.

Watching the US and UK parliaments pass law after law that further reduces their own relevance to governance reminds one of the self-destructiveness of the last stages of the Romanov dynasty. That government, too, made decision after decision that further reduced its own ability to survive, as many intellectuals pointed out at the time.

Governments have always passed irrelevant laws; this doesn't change their ability to wage wars or imprison people. While the powerful are using big data more and more to make their decisions, they still remain powerful. We're a long way away from computers actually making the decisions and we'll probably never get there; the powerful like to maintain their hold on power.

Seth Schoen

August 02, 2014


"Even if traffic confirmation/sibyl attacks were not feasible on Tor, isn't encrypted traffic recording by States putting them at risk indefinitely in the future anyway?"

Perfect Forward Secrecy is our friend here.

The situation is dangerous but not hopeless.

Remember, fabulous researchers like Citizen Lab are working on behalf of the people. We are not entirely friendless, although we certainly have a dismaying variety of state-sponsored enemies.

Seth Schoen

August 02, 2014


" It may be governments with access to virtually the entire internet (NSA/GCHQ) blanket target Tor users, and assume they are doing something suspicious."

No need to suppose anything. This is verified fact, strongly documented in published documents from Snowden's trove.

So we all have a problem. But they have worse problems, so we can win the War on US.

Seth Schoen

August 02, 2014


In recent weeks, certain entities have mounted quite an effort to persuade the Tor userbase to abandon Tor entirely. We can and should turn such "suasion" right back upon them. How? Just think of ways in which we can make them stop and think before sending more malware our way.

One effective way to do this has been suggested, in a slightly different context, by Nathan Yee (Univ. of Arizona):

http://www.theregister.co.uk/2014/08/01/bust_comment_crew_with_this_arm…
Security chap writes recipe for Raspberry Pi honeypot network
Cunning security plan: dangle £28 ARM boxes and watch crooks take the bait
Darren Pauli
1 Aug 2014

1. Set Hidden Service Baits
2. Capture
3. Share with Citizen Labs, CCC, EFF....
4. Reverse Engineer
5. Analyze
6. Trace back to Source
7. Publish
8. Dissuade future attacks

Security researchers: who would you rather outwit? NSA/GCHQ or some inconsequential professional criminal?

Seth Schoen

August 03, 2014


I have a technical question:

As I understand it, the attack was independent of the way Tor gets used. Or is there any difference between using Tor via e.g. the Browser Bundle, Tails, or Linux Liberté with regard to this type of attack?