Tor security advisory: "relay early" traffic confirmation attack

This advisory was posted on the tor-announce mailing list.

SUMMARY:

On July 4 2014 we found a group of relays that we assume were trying to deanonymize users. They appear to have been targeting people who operate or access Tor hidden services. The attack involved modifying Tor protocol headers to do traffic confirmation attacks.

The attacking relays joined the network on January 30 2014, and we removed them from the network on July 4. While we don't know when they started doing the attack, users who operated or accessed hidden services from early February through July 4 should assume they were affected.

Unfortunately, it's still unclear what "affected" includes. We know the attack looked for users who fetched hidden service descriptors, but the attackers likely were not able to see any application-level traffic (e.g. what pages were loaded or even whether users visited the hidden service they looked up). The attack probably also tried to learn who published hidden service descriptors, which would allow the attackers to learn the location of that hidden service. In theory the attack could also be used to link users to their destinations on normal Tor circuits too, but we found no evidence that the attackers operated any exit relays, making this attack less likely. And finally, we don't know how much data the attackers kept, and due to the way the attack was deployed (more details below), their protocol header modifications might have aided other attackers in deanonymizing users too.

Relays should upgrade to a recent Tor release (0.2.4.23 or 0.2.5.6-alpha), to close the particular protocol vulnerability the attackers used — but remember that preventing traffic confirmation in general remains an open research problem. Clients that upgrade (once new Tor Browser releases are ready) will take another step towards limiting the number of entry guards that are in a position to see their traffic, thus reducing the damage from future attacks like this one. Hidden service operators should consider changing the location of their hidden service.

THE TECHNICAL DETAILS:

We believe they used a combination of two classes of attacks: a traffic confirmation attack and a Sybil attack.

A traffic confirmation attack is possible when the attacker controls or observes the relays on both ends of a Tor circuit and then compares traffic timing, volume, or other characteristics to conclude that the two relays are indeed on the same circuit. If the first relay in the circuit (called the "entry guard") knows the IP address of the user, and the last relay in the circuit knows the resource or destination she is accessing, then together they can deanonymize her. You can read more about traffic confirmation attacks, including pointers to many research papers, at this blog post from 2009:
https://blog.torproject.org/blog/one-cell-enough
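The passive flavor of this attack can be sketched in a few lines. A toy illustration (all traffic traces invented): each observer bins the cell timestamps it sees into fixed time windows, and correlating the per-window counts is enough to match up the two ends of a busy circuit.

```python
# Toy passive traffic-confirmation sketch: correlate per-window cell counts
# seen at two observation points. All traffic traces here are invented.

def window_counts(timestamps, window=1.0, n_windows=10):
    """Bin event timestamps into fixed-size windows and count per window."""
    counts = [0] * n_windows
    for t in timestamps:
        i = int(t // window)
        if 0 <= i < n_windows:
            counts[i] += 1
    return counts

def correlation(a, b):
    """Pearson correlation coefficient of two equal-length count vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

# Cells seen at the entry guard for one circuit...
guard_side = [0.1, 0.2, 1.1, 1.2, 1.3, 4.0, 4.1, 7.5, 7.6, 7.7, 7.8]
# ...and the same burst pattern at the far end, shifted by network latency.
far_side = [t + 0.05 for t in guard_side]
# An unrelated circuit with a different traffic pattern.
other = [2.0, 3.0, 5.0, 6.0, 8.0, 9.0]

same = correlation(window_counts(guard_side), window_counts(far_side))
diff = correlation(window_counts(guard_side), window_counts(other))
print(same > 0.9 and diff < 0.5)  # matching ends correlate strongly
```

In the attack described in this advisory no correlation was needed at all, since the injected header signal gives a direct match; this sketch is the passive, statistical variant that the advisory says remains an open research problem.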

The particular confirmation attack they used was an active attack where the relay on one end injects a signal into the Tor protocol headers, and then the relay on the other end reads the signal. These attacking relays were stable enough to get the HSDir ("suitable for hidden service directory") and Guard ("suitable for being an entry guard") consensus flags. Then they injected the signal whenever they were used as a hidden service directory, and looked for an injected signal whenever they were used as an entry guard.

The way they injected the signal was by sending sequences of "relay" vs "relay early" commands down the circuit, to encode the message they want to send. For background, Tor has two types of cells: link cells, which are intended for the adjacent relay in the circuit, and relay cells, which are passed to the other end of the circuit. In 2008 we added a new kind of relay cell, called a "relay early" cell, which is used to prevent people from building very long paths in the Tor network. (Very long paths can be used to induce congestion and aid in breaking anonymity). But the fix for infinite-length paths introduced a problem with accessing hidden services, and one of the side effects of our fix for bug 1038 was that while we limit the number of outbound (away from the client) "relay early" cells on a circuit, we don't limit the number of inbound (towards the client) relay early cells.

So in summary, when Tor clients contacted an attacking relay in its role as a Hidden Service Directory to publish or retrieve a hidden service descriptor (steps 2 and 3 on the hidden service protocol diagrams), that relay would send the hidden service name (encoded as a pattern of relay and relay-early cells) back down the circuit. Other attacking relays, when they get chosen for the first hop of a circuit, would look for inbound relay-early cells (since nobody else sends them) and would thus learn which clients requested information about a hidden service.
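The encoding idea can be sketched as follows. This is an illustration only: the cell names are real, but the framing, bit order, and the placeholder onion address are all invented, not Tor's actual wire format.

```python
# Illustration of the signaling idea only -- this is NOT Tor's wire format.
# Each bit of a message becomes a cell-type choice: RELAY_EARLY = 0, RELAY = 1.

RELAY, RELAY_EARLY = "RELAY", "RELAY_EARLY"

def encode(message):
    """Map each bit of an ASCII message to a cell type (MSB first)."""
    cells = []
    for byte in message.encode("ascii"):
        for i in range(7, -1, -1):
            cells.append(RELAY if (byte >> i) & 1 else RELAY_EARLY)
    return cells

def decode(cells):
    """Recover the message from an observed sequence of cell types."""
    bits = [1 if c == RELAY else 0 for c in cells]
    data = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        data.append(byte)
    return data.decode("ascii")

# "exampleonionaddr.onion" is a made-up placeholder address.
cells = encode("exampleonionaddr.onion")
print(decode(cells))  # the guard-side relay recovers the address
```

Because inbound relay-early cells never occur in normal operation, the guard-side relay can treat any such sequence as signal with essentially no false positives.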

There are three important points about this attack:

A) The attacker encoded the name of the hidden service in the injected signal (as opposed to, say, sending a random number and keeping a local list mapping random number to hidden service name). The encoded signal is encrypted as it is sent over the TLS channel between relays. However, this signal would be easy to read and interpret by anybody who runs a relay and receives the encoded traffic. And we might also worry about a global adversary (e.g. a large intelligence agency) that records Internet traffic at the entry guards and then tries to break Tor's link encryption. The way this attack was performed weakens Tor's anonymity against these other potential attackers too — either while it was happening or after the fact if they have traffic logs. So if the attack was a research project (i.e. not intentionally malicious), it was deployed in an irresponsible way because it puts users at risk indefinitely into the future.

(This concern is in addition to the general issue that it's probably unwise from a legal perspective for researchers to attack real users by modifying their traffic on one end and wiretapping it on the other. Tools like Shadow are great for testing Tor research ideas out in the lab.)

B) This protocol header signal injection attack is actually pretty neat from a research perspective, in that it's a bit different from previous tagging attacks which targeted the application-level payload. Previous tagging attacks modified the payload at the entry guard, and then looked for a modified payload at the exit relay (which can see the decrypted payload). Those attacks don't work in the other direction (from the exit relay back towards the client), because the payload is still encrypted at the entry guard. But because this new approach modifies ("tags") the cell headers rather than the payload, every relay in the path can see the tag.

C) We should remind readers that while this particular variant of the traffic confirmation attack allows high-confidence and efficient correlation, the general class of passive (statistical) traffic confirmation attacks remains unsolved and would likely have worked just fine here. So the good news is traffic confirmation attacks aren't new or surprising, but the bad news is that they still work. See https://blog.torproject.org/blog/one-cell-enough for more discussion.

Then the second class of attack they used, in conjunction with their traffic confirmation attack, was a standard Sybil attack — they signed up around 115 fast non-exit relays, all running on 50.7.0.0/16 or 204.45.0.0/16. Together these relays summed to about 6.4% of the Guard capacity in the network. Then, in part because of our current guard rotation parameters, these relays became entry guards for a significant chunk of users over their five months of operation.
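A back-of-the-envelope check on why 6.4% of guard capacity matters, assuming guard selection were simply proportional to guard capacity (this glosses over Tor's real bandwidth-weighting and rotation details):

```python
# Rough estimate of the chance that at least one of a client's entry guards
# is malicious, assuming guards are picked in proportion to guard capacity.
# This ignores Tor's real bandwidth-weighting and guard-rotation details.

attacker_fraction = 0.064  # ~6.4% of guard capacity, per the advisory

def p_compromised(num_guards):
    """P(at least one of num_guards independent picks hits the attacker)."""
    return 1 - (1 - attacker_fraction) ** num_guards

three_guards = p_compromised(3)  # the then-default of three entry guards
one_guard = p_compromised(1)     # the proposed single-guard design
print(round(three_guards, 3), round(one_guard, 3))  # 0.18 0.064
```

Roughly an 18% chance with three guards versus 6.4% with one, which is the intuition behind moving clients toward a single entry guard in the short-term steps listed later.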

We actually noticed these relays when they joined the network, since the DocTor scanner reported them. We considered the set of new relays at the time, and made a decision that it wasn't that large a fraction of the network. It's clear there's room for improvement in terms of how to let the Tor network grow while also ensuring we maintain social connections with the operators of all large groups of relays. (In general having a widely diverse set of relay locations and relay operators, yet not allowing any bad relays in, seems like a hard problem; on the other hand our detection scripts did notice them in this case, so there's hope for a better solution here.)

In response, we've taken the following short-term steps:

1) Removed the attacking relays from the network.

2) Put out a software update for relays to prevent "relay early" cells from being used this way.

3) Put out a software update that will (once enough clients have upgraded) let us tell clients to move to using one entry guard rather than three, to reduce exposure to relays over time.

4) Clients can tell whether they've received a relay or relay-early cell. For expert users, the new Tor version warns you in your logs if a relay on your path injects any relay-early cells: look for the phrase "Received an inbound RELAY_EARLY cell".
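A minimal sketch of automating that log check; only the quoted warning phrase comes from the advisory, and the sample log lines and their surrounding formatting are invented:

```python
# Scan Tor log lines for the inbound RELAY_EARLY warning mentioned above.
# Only the quoted warning phrase comes from the advisory; the sample log
# lines and their surrounding formatting are invented.

WARNING = "Received an inbound RELAY_EARLY cell"

def find_warnings(lines):
    """Return the log lines that contain the RELAY_EARLY warning phrase."""
    return [line for line in lines if WARNING in line]

sample_log = [
    "Jul 30 12:00:01.000 [notice] Bootstrapped 100%: Done.",
    "Jul 30 12:05:42.000 [warn] Received an inbound RELAY_EARLY cell "
    "on circuit 1234.",  # hypothetical wording around the known phrase
]
print(len(find_warnings(sample_log)))  # 1
```

Note that seeing this warning only tells you that some relay injected the cells toward you, not whether your entry guard was actively looking for them.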

The following longer-term research areas remain:

5) Further growing the Tor network and diversity of relay operators, which will reduce the impact from an adversary of a given size.

6) Exploring better mechanisms, e.g. social connections, to limit the impact from a malicious set of relays. We've also formed a group to pay more attention to suspicious relays in the network:
https://blog.torproject.org/blog/how-report-bad-relays

7) Further reducing exposure to guards over time, perhaps by extending the guard rotation lifetime:
https://blog.torproject.org/blog/lifecycle-of-a-new-relay
https://blog.torproject.org/blog/improving-tors-anonymity-changing-guar…

8) Better understanding statistical traffic correlation attacks and whether padding or other approaches can mitigate them.

9) Improving the hidden service design, including making it harder for relays serving as hidden service directory points to learn what hidden service address they're handling:
https://blog.torproject.org/blog/hidden-services-need-some-love

OPEN QUESTIONS:

Q1) Was this the Black Hat 2014 talk that got canceled recently?
Q2) Did we find all the malicious relays?
Q3) Did the malicious relays inject the signal at any points besides the HSDir position?
Q4) What data did the attackers keep, and are they going to destroy it? How have they protected the data (if any) while storing it?

Great questions. We spent several months trying to extract information from the researchers who were going to give the Black Hat talk, and eventually we did get some hints from them about how "relay early" cells could be used for traffic confirmation attacks, which is how we started looking for the attacks in the wild. They haven't answered our emails lately, so we don't know for sure, but it seems likely that the answer to Q1 is "yes". In fact, we hope they *were* the ones doing the attacks, since otherwise it means somebody else was. We don't yet know the answers to Q2, Q3, or Q4.

Yes, but the field of adversarial machine learning is very young compared to typical machine learning. Or said another way, most machine learning algorithms fall very quickly to an adversary that knows the algorithm and tries to manipulate it.

See the papers on 'adversarial stylometry' for some fun examples here.

When you're doing any kind of AI against an adversary, independence assumptions turn into vulnerabilities that he can exploit. Just look at search engines. They have to devote considerable resources to thwarting the "SEO" guys who try to cheat their ranking algorithms.

Anonymous, July 30, 2014:

Interesting - that'd be quite nice: you could encode the hidden service address as zeros and ones and send it back down the circuit (e.g. EARLY = 0, RELAY = 1). As EARLYs would never normally be sent backward, it's also trivial to spot a client that has been caught.

But as Roger says, traffic confirmation attacks are still very easy even with this patched - it's just that this attack made it marginally easier.

"it's also trivial to spot a client that has been caught" -- it depends what you mean here. It's trivial to notice (at the client) that somebody is sending you relay-early cells. But that doesn't tell you about whether your entry guard is looking for them. So if you mean "attacked" when you say "caught", I agree.

Anonymous, July 30, 2014:

Most likely the NSA "intercepted" the talk and made use of the technique for several months. Had it not been axed the hole would've been patched sooner. Isn't it obvious?

Actually, the NSA doesn't need to (and from the evidence we've seen, actually doesn't) run relays of their own.

But that shouldn't make you happy, since one of the huge risks is about how many parts of the network they can observe, not how many relays they operate. They don't need to run their own relays, if they can just wait until nice honest folks set up a relay in a network location that they're already tapping.

Now, the interesting thing about the traffic confirmation attack here is that you actually do need to operate the entry guard, not just observe its traffic (because you need to see inside the link encryption). So in fact the NSA would have to run a bunch of relays in order to do this exact attack.

But the more general form of traffic confirmation attack can be done (if you're in the right places in the network) by correlating traffic volume and timing -- and that can be done passively just by watching network traffic.

The two blog posts to read for more details are:
https://blog.torproject.org/blog/one-cell-enough
https://blog.torproject.org/blog/improving-tors-anonymity-changing-guar…

"you actually do need to operate the entry guard, not just observe its traffic (because you need to see inside the link encryption)"
Except, Heartbleed lets you dump the encryption keys... or did around that time, right?

Correct -- that's part of why I mentioned a global adversary who logs traffic and then tries to break the link encryption.

Though to be fair, nobody has successfully shown that heartbleed could be used to extract link encryption keys from a Tor relay.

(Though to be extra fair, nobody has successfully shown that you couldn't.)

Yes, I get that the NSA's strength is in network observation.

Although they could also run numerous malicious relays, thousands perhaps, that would be too obvious too quickly to be very useful. Right?

Anonymous, July 30, 2014:

When I updated to 3.6.3 on Mac, there was no reference to Tor 0.2.4.23. When I click on "about tor" in the browser, it reads version 1.6.11.0, maintainer Mike Perry. Is there a problem here?

Also, when I go to the tor announcement page, at the bottom there is a url: http://lists.torproject.org/pipermail/tor-announce/attachments/20140730…

which, when clicked, opens a window offering to open it with GPGServices.service. When downloaded and opened, a window appears that says "attachment-1.sig, Verification FAILED: No signature."

What is happening? Thank you.

There hasn't been a Tor Browser Bundle with the new log message released yet.

BTW, that 1.6.11.0 version number is for the Tor Button Firefox extension.

Correct, TBB 3.6.3 doesn't have Tor 0.2.4.23 in it yet. See
https://blog.torproject.org/blog/tor-security-advisory-relay-early-traf…
for details.

As for your gpg thing trying to automatically interpret the signature on the mailman archives, I am not surprised that it doesn't work. If you were on the tor-announce mailing list, and got the signed mail, then you could check the signature on the mail there. But the result will be that you can verify that I really sent the mail -- it sounds from your questions like you were hoping it would do something else.

Anonymous, July 30, 2014:

BBC News and other outlets in the UK recently (early-July) carried a set of reports regarding large numbers of people (over 600) who were arrested for accessing illegal material via TOR hidden services.

The reports extensively quoted police sources, who, amongst the usual fluff associated with such reports, explicitly claimed that the UK police and intelligence services could de-anonymise TOR hidden services, but declined to indicate how.

The dates quoted in the reports for the 600 arrests supposedly connected with de-anonymisation of TOR hidden services were "the last six months" preceding mid-July, i.e. almost exactly matching the Jan 30th to July 4th window quoted above by the TOR foundation.

The story also appears to have been fed to the UK press by the UK police and Intelligence services within a few days of the compromised TOR relays being disconnected.

This may just be coincidence, but it smells fishy to me.

It is also worth noting that, according to BBC Wales reports, several of the 600 arrested subsequently committed suicide (before they could be either charged or tried).

Big busts like that are always hyped up in the media as much as possible. They want to make it sound as dramatic as possible, spread FUD among other criminals, win brownie points with tabloid newspapers, and so on. If a journalist asks you, the detective, "hey, that dark web thing, did some of the guys you arrested use the dark web?", of course you're gonna say "yes". But that doesn't mean that they broke tor itself. Maybe some of the people arrested messed up and deanonymized themselves another way. Maybe they were targeted with another Firefox 0-day. Who knows?

The announcement of this big "bust" also coincided with the rushing through of the UK's new data retention law through parliament. It coincided with a whole lot of things. There isn't anything to be gained out of speculating like this.

Presumably the increased interagency coöperation of Operation Notarise snagged a lot of low-hanging fruit, so the NCA can spread all the FUD it wants. That OPSEC is hard shall be borne out in the trial transcripts. Which should make for an interesting study nonetheless: there are more paedophiles than terrorists.

This is nonsense.

I assume the UK is a state ruled by law. Merely accusing users of having asked for hidden services, without even knowing the exact service or page they requested, or whether they really visited the web site, is not enough to arrest anyone.

I'd really be surprised if they weren't just doing the same old P2P file sharing stuff. That does fit the broadest definition of "dark net", i.e. not indexed by conventional search engines. And last time I looked people were still dumb enough to share illegal porn that way. (Based on file names only, I certainly didn't download any.)

Agree.
Using non-anonymous P2P is the most foolish thing to do with that kind of stuff, and there were some other European mass-stings in the previous couple of years, with hundreds of people investigated or arrested.
It could be just a PSYOP trying to scare Tor users, even if Tor wasn't involved.

BTW, "dark net" could also mean Freenet, not necessarily Tor!

Who says those people didn't screw up something else? They could have used tor2web. They could have sent emails. They could have been in the Tormail database that the FBI seized.

Don't believe everything the press tells you.

Anonymous, July 30, 2014:

That number of 115/116 relays is too low. I checked the server descriptor archives from January to July and found a total of 167 relay fingerprints, and here's the kicker: January had the largest number (161), then decreasing from month to month until it's at 116 for June and July! (The IP address count was 121 from January to May, and 116 from June to July.) Someone should definitely analyze the data from a lot further back in time, because we might be looking at the wind-down phase of something.

Fingerprints January - July:
https://gist.github.com/anonymous/901239f40977e6045756

IP addresses January - July:
https://gist.github.com/anonymous/a0e0f0725f88c5dfc471

relayearly_extractor.sh:
https://gist.github.com/anonymous/1c5c9328acb8b686f155

There are definitely more attacker relay descriptors in 2013:

Lots of additional ones in December 2013 that match the original criteria (IP blocks + Unnamed + 0.2.4.18-rc), but in November it gets complicated because v0.2.4.18-rc was only released in the middle of that month, and the attacker did not immediately upgrade. Take a look at November's descriptors for relay fingerprint 06D5 508F 225A 3D94 C25B E4E7 FD55 1CAD 1CE3 5672, they used v0.2.3.25 and then only in January 2014 was it finally upgraded to 0.2.4.18-rc.

237 likely attacker relay fingerprints 2013-10 - 2014-07 (IP blocks + Unnamed + either of the two versions)
https://gist.github.com/anonymous/a7f5addc58f5418e045b

Observed attacker platform descriptors:
32997 Tor 0.2.4.18-rc on FreeBSD
1700 Tor 0.2.4.18-rc on Linux
749 Tor 0.2.3.25 on Linux
42 Tor 0.2.2.35 (git-73ff13ab3cc9570d) on Linux x86_64
1 Tor 0.2.3.25 on Windows XP
1 Tor 0.2.3.25 on Windows 8

(Apparently using Windows as a development platform, which suggests that the attacker truly is evil.)
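The matching criteria these comments describe (addresses in 50.7.0.0/16 or 204.45.0.0/16, nickname "Unnamed", and one of the two attacker Tor versions) could be sketched like this; the descriptor records are simplified to dicts and the sample data is invented:

```python
# Sketch of filtering relay descriptors on the criteria discussed above:
# source IP block, nickname "Unnamed", and one of the two attacker versions.
# Descriptor records are simplified to dicts; the sample data is invented.
# Matching /16 blocks by string prefix is a simplification that works here.

SUSPECT_PREFIXES = ("50.7.", "204.45.")
SUSPECT_VERSIONS = ("Tor 0.2.4.18-rc", "Tor 0.2.3.25")

def looks_like_attacker(desc):
    return (desc["nickname"] == "Unnamed"
            and desc["address"].startswith(SUSPECT_PREFIXES)
            and any(desc["platform"].startswith(v) for v in SUSPECT_VERSIONS))

descriptors = [
    {"nickname": "Unnamed", "address": "50.7.1.2",
     "platform": "Tor 0.2.4.18-rc on FreeBSD"},
    {"nickname": "Unnamed", "address": "204.45.3.4",
     "platform": "Tor 0.2.3.25 on Linux"},
    {"nickname": "friendlyrelay", "address": "93.184.216.34",
     "platform": "Tor 0.2.4.23 on Linux"},
]
suspects = [d for d in descriptors if looks_like_attacker(d)]
print(len(suspects))  # 2
```

The actual extractor scripts linked above run over the full descriptor archives; this only illustrates the filter logic.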

Updated relayearly_extractor_v2.sh (the first one didn't work if your grep is not egrep):
https://gist.github.com/anonymous/c714a58b2c7cebc1b051

My old netbook is quickly reaching its limits here, performance-wise, so can someone else please take this up?

Also people, if you have btrfs/zfs snapshots of your filesystem, that's a way to check your historical Tor state files.

I suspected that there were false positives and retried using a stricter approach, namely grepping for the original 116 relay fingerprints on the 2013-09 - 2014-07 descriptor dataset. This still returned matches dated 2013-12-26 - 2014-07-09, but the only platform was "Tor 0.2.4.18-rc on FreeBSD".

Anonymous, July 30, 2014:

A build for 0.2.4.23 isn't available on the repository for Trusty armhf, it's still on 0.2.4.22. Is this an oversight, or is it just still coming for ARM architecture?

Anonymous, July 30, 2014:

One question that came up was "What about Tor users who connect to non-hidden services?" That is, people who use Tor for ordinary web browsing.

For example www.google.com. I suppose the traffic confirmation attack would gather data, but that would only work to identify the user, not the fact that the user connected to Tor and then went to google.com.

You could do this from an exit node as well.

But just as importantly, if you visit an HTTP site through an exit node they control, it is trivial for an adversary to do a traffic confirmation attack by injecting a bit of javascript into the fetched reply.

And even for HTTPS, the adversary can manipulate your traffic reply rate when they control the exit node for a confirmation attack.

The real innovation is that you can use it to say "who's accessing OR publishing a given hidden service", which is a particularly powerful capability.

No, I don't think that last part is accurate.

If you run a bunch of relays and you manage to get one of your relays into the HSDir position in the circuit and another in the entry guard position, you can do passive traffic volume and timing correlation between them to realize they're on the same circuit. You don't need to do anything active (either the header tagging approach described in this advisory, or the javascript injection thing you describe, etc).

So the "real innovation" here isn't that traffic confirmation attacks are now enabled in a new situation where they didn't work before. They likely worked just fine. "So the good news is traffic confirmation attacks aren't new or surprising, but the bad news is that they still work."

For this specific attack, it doesn't look like they were deanonymizing "regular" users. Since none of the relays they set up were exit relays, you would never pick one of them as the last hop in your path to the website you want to visit.

Still, it's possible that the particular method they used could be modified to deanonymize "regular" users. Wiretap your exit relay, parse destination IP addresses from packet headers, encode them in RELAY/RELAY_EARLY sequences, then send them back down the circuit to the guard like before. Such an attack could discover that you (your IP address) connected to google.com at some date and time. If you weren't using HTTPS, it could find out that you (your IP address) connected to google.com and googled for kittens.

Also, it's important to realize that this is not the first and certainly not the last example of a traffic correlation attack on the tor network. It's one of the fundamental problems which is very hard (maybe impossible?) to get around in low-latency anonymity systems. You can help everyone mitigate the threat from these kinds of attacks by running a relay, which lowers the bad guy's chances of being your first and last hop.
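The "first and last hop" risk in that last paragraph is often approximated as (c/n)^2 for an adversary controlling c of the network's n relays. A rough sketch with invented numbers, ignoring guard pinning and bandwidth weighting:

```python
# Rough per-circuit endpoint-compromise estimate: the adversary wins when it
# holds both the entry and the exit of the same circuit, roughly (c/n)^2 for
# c adversary relays out of n. Ignores guard pinning and bandwidth weights.

def endpoint_compromise(adversary_relays, total_relays):
    f = adversary_relays / total_relays
    return f * f

before = endpoint_compromise(100, 5000)  # invented network size
after = endpoint_compromise(100, 6000)   # same adversary, more honest relays
print(before > after)  # growing the network dilutes the adversary
```

This is the quantitative sense in which running a relay helps everyone: each additional honest relay shrinks the fraction of circuits the adversary can see end to end.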

The problem lies in the fact that many of us users are technologically deprived.
I'd love to "run a relay" or however it's termed because I would so enhance the security/anonymity benefits but lack the knowledge and skills to implement same.
And flinging me at anything less than a dummies guide for the ungeeks is simply a waste of your'n valuable time.
Like many on this planet I still use a XP and I'm not even sure the 1.5GHz OS would be able to support y'alls nodes and relays.
I think y'all need a higher class of clientele - which sorta leaves us poor, struggling revolutionaries who be aspiring to also be members of a stable proletariat out inna cold...

If your PC is on all the time anyway and you have a decent broadband connection, running a relay is helpful and not too difficult: https://www.torproject.org/getinvolved/relays.html.en

It is possible to run a relay on XP, but... XP is no longer supported by Microsoft and thus has an ever-growing list of known vulnerabilities which they don't plan to ever fix. So, you really shouldn't be using XP for anything, much less a Tor relay.

You don't need to buy a new PC to escape the security disaster that is Windows XP. I highly recommend trying some flavor of GNU/Linux. Many of them are actually very easy to use these days. Check out http://getgnulinux.org/ to get started.

If you can't switch immediately but want to try it out, you could also get a copy of Tails https://tails.boum.org/ a privacy-focused (Tor preinstalled) GNU/Linux distribution you can run from a CD or USB stick without touching your hard drive. You can't (easily) run a relay from Tails, though.

"XP is no longer supported by Microsoft and thus has an ever-growing list of known vulnerabilities which they don't plan to ever fix. So, you really shouldn't be using XP for anything, much less a Tor relay."

Amen to that!

OP here:
Grateful thanks fer y'alls responses.
Regrettably, living in a country with a fast falling exchange rate makes replacing my XP prohibitive and Win8 or similar ain't my preference. My personal browsing/computing hobby don't need any "enhancing" an' I love touch-pads. It's all I use as an interface - I don' wanna to keep needing to do the "gorilla-arm" thing.
My security never, ever bin breached 'cos I allus go offline [an' I don't follow dodgy links] after use - so bye-bye relays. I weren't aware that the device needed to be online 24/7.
When we were still being issued with a separate Vidalia interface I did notice that, betimes, some operators not allus online so I assumed that relay availability too would fluctuate. But in those days I wuz dialup with a cap on 512Mb monthly.
Now I broadband with a 2Gb month cap and I thought I could somehow add "support" to Tor security by adding m'self to the network on an ad hoc basis - thereby adding to my own security too.
What else kin I do then here to make our world a better, more secure place - and to stop me feelin' jus' like a parasitic appendage...

"My security never, ever bin breached 'cos I allus go offline"

LOL. Sorry, but if you're using XP and ever going online at all, your computer is most probably compromised by multiple people and organizations.

"What else kin I do then here to make our world a better, more secure place - and to stop me feelin' jus' like a parasitic appendage..."

Educate yourself, and then teach others.

Here are a few links:

How to get started using Tails (an easy way to start quitting XP): https://tails.boum.org/getting_started/index.en.html

Find a hackerspace near you, or start one yourself: http://hackerspaces.org

Find a cryptoparty near you: http://www.cryptoparty.in/parties/upcoming
...or start one yourself: http://www.cryptoparty.in/organize/howto

There are lots of free university courses online, for instance:
https://www.coursera.org/course/pythonlearn
https://www.coursera.org/specialization/fundamentalscomputing/9
https://www.coursera.org/course/crypto

There are more opportunities to educate yourself for free now than ever before. All that you really need (besides a desire to learn) is some free time and an internet connection. It sounds like you might be one of the people lucky enough to have those two things (and certainly, many people are not so lucky) so you should go for it!

Anonymous, July 30, 2014:

Can anyone shed any light on how the actual execution of this attack was observed?

If the attackers were only sending the hidden service name in response to HSDir reads/writes, the detector would need to be, or be accessing, a hidden service for which the attacker was the current HSDir.

A relaying node could also detect relay_early cells in the wrong direction, but it wouldn't know they were related to HSDir traffic.

It sounds like the researchers begrudgingly dropped hints about the attack involving relay_early cells. This, combined with knowledge of the mysterious group of 115 relays joining the network, and the fact that they directly encoded the .onion address and sent it down the wire, means you could set up your own guard relay, watch for the relay, relay_early messages, and try to work out a pattern.

It's easier to set up your own client, and go to a hidden service, and see if you get any relay_early cells back. The relay at the HSDir point doesn't know who it's attacking when it's making the decision about whether to inject the signal.

You could just set up your own "testing" hidden service and "testing" client, and monitor them for a while. Eventually, your hidden service's descriptor will be served by a bad HSDir, and your client will pick a bad guard.

Anonymous, July 30, 2014:

I think that all content which goes over the tor network should be strongly encrypted, so that if they de-anonymize you they should still have much fun decrypting your content.

During browsing, you communicate with servers whose certificates you cannot easily verify manually, and agencies like the NSA can even produce faked or stolen SSL certificates for google and yahoo to perform man-in-the-middle attacks: https://www.schneier.com/blog/archives/2013/09/new_nsa_leak_sh.html

So you have to restrict your communications to people that you personally know and whose security certificate you can verify.

For this, retroshare http://retroshare.sourceforge.net/ does a good job, when it operates over tor.

Actually, to ensure security I create a TC volume and upload this to a random upload service. The PW can be anything up to max. characters. And by using Tor [ without Java or javascript ] I'm ensuring that any traces of my ip are negligible. The upload itself too has a short lifespan.
Of course, now that TC is no longer with us this isn't a viable method any longer - so I'm free to reveal it. But I'm sure you get my drift.

1) To those who said "good luck talking to clearnet then":
I said, run retroshare http://retroshare.sourceforge.net/ over tor. There are tutorials for this on the net. For example here: http://wiki.piratenpartei.de/RetroShare#Paranoide_Konfiguration

https://www.whonix.org/wiki/Chat#RetroShare

The german pirate party calls this the "paranoid configuration".

So no, when you are using retroshare over tor, you are not "talking to clearnet then".

Use tor to anonymize your IP, and then retroshare, to encrypt your content.

2)
yes, truecrypt also does a good job of encrypting files. Actually, I do this: I run tor, over it retroshare, and with this I send truecrypt containers to my friends.....

Truecrypt is, however, not a good way to encrypt chat, email and voip. And that is what retroshare is good for. Note that your mobile is one of the biggest sources of metadata for every agency. With retroshare, you have encrypted voip, and when you run this over tor, then the agencies have much fun. They have to de-anonymize tor, and then crack the retroshare encryption, and for files they have to decrypt my truecrypt containers....

I wish them luck with that....

As for webbrowsing: There you communicate with webservers that you do not know personally. It is difficult for the average user to verify a google certificate. As I explained above, NSA can fake certificates for google servers. So the process of webbrowsing is probably an insecure thing by principle. Perhaps one should abandon webbrowsing altogether but instead restrict ones communication to individual people whose certificate one can verify. And that is why I recommended retroshare to be used over tor.

Anonymous, July 30, 2014:

I don't understand how the headers can make it past a single relay-to-relay hop.

There should be no information that is transferred beyond a relay-relay transfer except perhaps the exit node, entry node, and hop count (and even those should be avoided if possible).

Client - request exit public key from directory of exit nodes
\|/
Node1-entry node (sends request to exit node)
\|/
Node3-exit node (responds with exit node public key)
\|/
Node1-entry node (responds with exit node public key)
\|/
Client - encrypts with exit node public key, sends that and Client public key
\|/
Node1 (requests Node2 public key)
\|/
Node2 (responds with Node2 public key)
\|/
Node1 (requests Node3 public key)
\|/
Node3 (responds with relay public key -- not exit node public key)
\|/
Node1 (encrypts data, entry node, and Node1 public key with Node3 public key, then also exit node with Node2 public key)
\|/
Node2 (decrypts Node1-Node2 information, requests Node3 public key with information about chosen exit node)
\|/
Node3 (responds to Node2 with public key)
\|/
Node2 (encrypts previously encrypted data with Node3 public key and sends Node2 public key)
\|/
Node3-exit node (decrypts data encrypted by Client (hopefully still SSL encrypted), sends request, receives response, encrypts with Client public key and then includes entry node in encryption with Node2 public key, encrypts request id with Node1 public key)
\|/
Node2 (decrypts with Node3 public key, has entry node, request id is still encrypted with Node1 public key)
\|/
Node1 (decrypts request id that was encrypted with Node1 public key, looks up client, sends data that is still encrypted with Client public key to Client)
\|/
Client receives response and decrypts data that was encrypted with its public key and then further decrypts any extra encrypted data (like SSL) from the response

This way, there's no room for non-necessary headers, which would mess up the data. The only way an attack like the one mentioned would work (as far as I can see) would be to tack extra data onto the encrypted data block, readable by both the entry node and the exit node, which could then correlate it. This would result in a majority of connections being bad, so it would be noticeable.