Tor security advisory: "relay early" traffic confirmation attack

This advisory was posted on the tor-announce mailing list.


On July 4 2014 we found a group of relays that we assume were trying to deanonymize users. They appear to have been targeting people who operate or access Tor hidden services. The attack involved modifying Tor protocol headers to do traffic confirmation attacks.

The attacking relays joined the network on January 30 2014, and we removed them from the network on July 4. While we don't know when they started doing the attack, users who operated or accessed hidden services from early February through July 4 should assume they were affected.

Unfortunately, it's still unclear what "affected" includes. We know the attack looked for users who fetched hidden service descriptors, but the attackers likely were not able to see any application-level traffic (e.g. what pages were loaded or even whether users visited the hidden service they looked up). The attack probably also tried to learn who published hidden service descriptors, which would allow the attackers to learn the location of that hidden service. In theory the attack could also be used to link users to their destinations on normal Tor circuits too, but we found no evidence that the attackers operated any exit relays, making this attack less likely. And finally, we don't know how much data the attackers kept, and due to the way the attack was deployed (more details below), their protocol header modifications might have aided other attackers in deanonymizing users too.

Relays should upgrade to a recent Tor release to close the particular protocol vulnerability the attackers used — but remember that preventing traffic confirmation in general remains an open research problem. Clients that upgrade (once new Tor Browser releases are ready) will take another step towards limiting the number of entry guards that are in a position to see their traffic, thus reducing the damage from future attacks like this one. Hidden service operators should consider changing the location of their hidden service.


We believe the attackers used a combination of two classes of attacks: a traffic confirmation attack and a Sybil attack.

A traffic confirmation attack is possible when the attacker controls or observes the relays on both ends of a Tor circuit and then compares traffic timing, volume, or other characteristics to conclude that the two relays are indeed on the same circuit. If the first relay in the circuit (called the "entry guard") knows the IP address of the user, and the last relay in the circuit knows the resource or destination she is accessing, then together they can deanonymize her. You can read more about traffic confirmation attacks, including pointers to many research papers, at this blog post from 2009:

The particular confirmation attack they used was an active attack where the relay on one end injects a signal into the Tor protocol headers, and then the relay on the other end reads the signal. These attacking relays were stable enough to get the HSDir ("suitable for hidden service directory") and Guard ("suitable for being an entry guard") consensus flags. Then they injected the signal whenever they were used as a hidden service directory, and looked for an injected signal whenever they were used as an entry guard.

The way they injected the signal was by sending sequences of "relay" vs "relay early" commands down the circuit, to encode the message they want to send. For background, Tor has two types of cells: link cells, which are intended for the adjacent relay in the circuit, and relay cells, which are passed to the other end of the circuit. In 2008 we added a new kind of relay cell, called a "relay early" cell, which is used to prevent people from building very long paths in the Tor network. (Very long paths can be used to induce congestion and aid in breaking anonymity). But the fix for infinite-length paths introduced a problem with accessing hidden services, and one of the side effects of our fix for bug 1038 was that while we limit the number of outbound (away from the client) "relay early" cells on a circuit, we don't limit the number of inbound (towards the client) relay early cells.

So in summary, when Tor clients contacted an attacking relay in its role as a Hidden Service Directory to publish or retrieve a hidden service descriptor (steps 2 and 3 on the hidden service protocol diagrams), that relay would send the hidden service name (encoded as a pattern of relay and relay-early cells) back down the circuit. Other attacking relays, when they get chosen for the first hop of a circuit, would look for inbound relay-early cells (since nobody else sends them) and would thus learn which clients requested information about a hidden service.
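The cell-type signal described above can be illustrated with a toy sketch. This is a hypothetical encoding for illustration only — Tor's real cells carry far more framing, and the attackers' exact encoding is not public:

```python
# Hypothetical sketch: the HSDir-side relay encodes the hidden service
# name as a pattern of relay vs relay-early cell types sent back down
# the circuit, and the guard-side relay decodes the observed pattern.

RELAY, RELAY_EARLY = "relay", "relay_early"

def encode_signal(onion_name: str) -> list[str]:
    """Encode each bit of the name as a relay / relay_early cell type."""
    bits = "".join(f"{byte:08b}" for byte in onion_name.encode("ascii"))
    return [RELAY_EARLY if bit == "1" else RELAY for bit in bits]

def decode_signal(cells: list[str]) -> str:
    """Recover the name from the observed sequence of cell types."""
    bits = "".join("1" if cell == RELAY_EARLY else "0" for cell in cells)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("ascii")

cells = encode_signal("example.onion")
assert decode_signal(cells) == "example.onion"
```

Note that any relay on the path can read this pattern, since cell types are visible per hop — which is exactly point A below.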

There are three important points about this attack:

A) The attacker encoded the name of the hidden service in the injected signal (as opposed to, say, sending a random number and keeping a local list mapping random number to hidden service name). The encoded signal is encrypted as it is sent over the TLS channel between relays. However, this signal would be easy to read and interpret by anybody who runs a relay and receives the encoded traffic. And we might also worry about a global adversary (e.g. a large intelligence agency) that records Internet traffic at the entry guards and then tries to break Tor's link encryption. The way this attack was performed weakens Tor's anonymity against these other potential attackers too — either while it was happening or after the fact if they have traffic logs. So if the attack was a research project (i.e. not intentionally malicious), it was deployed in an irresponsible way because it puts users at risk indefinitely into the future.

(This concern is in addition to the general issue that it's probably unwise from a legal perspective for researchers to attack real users by modifying their traffic on one end and wiretapping it on the other. Tools like Shadow are great for testing Tor research ideas out in the lab.)

B) This protocol header signal injection attack is actually pretty neat from a research perspective, in that it's a bit different from previous tagging attacks which targeted the application-level payload. Previous tagging attacks modified the payload at the entry guard, and then looked for a modified payload at the exit relay (which can see the decrypted payload). Those attacks don't work in the other direction (from the exit relay back towards the client), because the payload is still encrypted at the entry guard. But because this new approach modifies ("tags") the cell headers rather than the payload, every relay in the path can see the tag.

C) We should remind readers that while this particular variant of the traffic confirmation attack allows high-confidence and efficient correlation, the general class of passive (statistical) traffic confirmation attacks remains unsolved and would likely have worked just fine here. So the good news is traffic confirmation attacks aren't new or surprising, but the bad news is that they still work. See the discussion of traffic confirmation attacks above for more.

Then the second class of attack they used, in conjunction with their traffic confirmation attack, was a standard Sybil attack — they signed up around 115 fast non-exit relays, all running on 50.7.0.0/16 or 204.45.0.0/16. Together these relays summed to about 6.4% of the Guard capacity in the network. Then, in part because of our current guard rotation parameters, these relays became entry guards for a significant chunk of users over their five months of operation.
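To put the 6.4% figure in perspective, here is a rough back-of-the-envelope model. The rotation count is an assumption for illustration (clients at the time held three guards and rotated them every month or two), not a measurement:

```python
# Back-of-the-envelope: if an attacker controls fraction p of guard
# capacity and a client holds k guards, re-chosen each rotation, the
# chance the attacker lands in at least one guard slot grows per rotation.

def p_compromised(p: float, k: int, rotations: int) -> float:
    honest_per_rotation = (1 - p) ** k
    return 1 - honest_per_rotation ** rotations

one_rotation = p_compromised(0.064, k=3, rotations=1)
five_months = p_compromised(0.064, k=3, rotations=3)  # assumed ~3 rotations

assert 0.17 < one_rotation < 0.19  # roughly 18% per guard-set choice
assert five_months > one_rotation
```

This is why the short-term steps below include moving clients to a single, longer-lived entry guard.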

We actually noticed these relays when they joined the network, since the DocTor scanner reported them. We considered the set of new relays at the time, and made a decision that it wasn't that large a fraction of the network. It's clear there's room for improvement in terms of how to let the Tor network grow while also ensuring we maintain social connections with the operators of all large groups of relays. (In general having a widely diverse set of relay locations and relay operators, yet not allowing any bad relays in, seems like a hard problem; on the other hand our detection scripts did notice them in this case, so there's hope for a better solution here.)

In response, we've taken the following short-term steps:

1) Removed the attacking relays from the network.

2) Put out a software update for relays to prevent "relay early" cells from being used this way.

3) Put out a software update that will (once enough clients have upgraded) let us tell clients to move to using one entry guard rather than three, to reduce exposure to relays over time.

4) Clients can tell whether they've received a relay or relay-early cell. For expert users, the new Tor version warns you in your logs if a relay on your path injects any relay-early cells: look for the phrase "Received an inbound RELAY_EARLY cell".
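The check behind steps 2 and 4 can be sketched in a few lines. This is an illustrative model, not Tor's actual C implementation: inbound relay-early cells are never legitimate, so any occurrence flags the circuit:

```python
# Illustrative sketch of the defense: count cell direction and type,
# and tear down any circuit that delivers an inbound relay_early cell.

INBOUND, OUTBOUND = "inbound", "outbound"

class Circuit:
    def __init__(self, circ_id: int):
        self.circ_id = circ_id
        self.closed = False

    def handle_cell(self, cell_type: str, direction: str) -> None:
        if cell_type == "relay_early" and direction == INBOUND:
            # This mirrors the log line the advisory tells users to
            # look for: "Received an inbound RELAY_EARLY cell".
            print(f"circ {self.circ_id}: Received an inbound RELAY_EARLY cell")
            self.closed = True

circ = Circuit(1)
circ.handle_cell("relay", INBOUND)
assert not circ.closed
circ.handle_cell("relay_early", INBOUND)
assert circ.closed
```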

The following longer-term research areas remain:

5) Further growing the Tor network and diversity of relay operators, which will reduce the impact from an adversary of a given size.

6) Exploring better mechanisms, e.g. social connections, to limit the impact from a malicious set of relays. We've also formed a group to pay more attention to suspicious relays in the network:

7) Further reducing exposure to guards over time, perhaps by extending the guard rotation lifetime:…

8) Better understanding statistical traffic correlation attacks and whether padding or other approaches can mitigate them.

9) Improving the hidden service design, including making it harder for relays serving as hidden service directory points to learn what hidden service address they're handling:


Q1) Was this the Black Hat 2014 talk that got canceled recently?
Q2) Did we find all the malicious relays?
Q3) Did the malicious relays inject the signal at any points besides the HSDir position?
Q4) What data did the attackers keep, and are they going to destroy it? How have they protected the data (if any) while storing it?

Great questions. We spent several months trying to extract information from the researchers who were going to give the Black Hat talk, and eventually we did get some hints from them about how "relay early" cells could be used for traffic confirmation attacks, which is how we started looking for the attacks in the wild. They haven't answered our emails lately, so we don't know for sure, but it seems likely that the answer to Q1 is "yes". In fact, we hope they *were* the ones doing the attacks, since otherwise it means somebody else was. We don't yet know the answers to Q2, Q3, or Q4.

If I'm reading your proposed protocol correctly, Node 1 knows the identity of Node 3? If so, there is no point in having a middle node. Read up on how Tor works, and on the history of relay_early cells (see the bug 1038 discussion mentioned above).


July 30, 2014






I was thinking about this after reading the article, seems to me that a way to be a bit more secure in using Tor would be..

1. Set up an OS on a virtual machine to keep it isolated
2. Get an account with an anonymous VPN provider that doesn't log.
3. Connect to the VPN
4. Connect to Tor

That way if the attacker did get your IP, it's not your actual address.. and if they gleaned information about your operating system, it's not your actual operating system.

Unfortunately I couldn't recommend which virtual machine software, operating system or VPN provider might be best since I've not done any research into it.. but it does seem like it would be a much more secure way of using Tor.

If you like GNU/Linux then Whonix is a virtualized Tor operating system. The basic concept of Whonix is called an 'isolated proxy'.

Two VMs are used. One runs as a virtualized Workstation (Debian, Fedora, even Windows if you're crazy). The other VM runs as a Tor Gateway VM and routes the Workstation VM's software applications over Tor.

This is what's called the isolated proxy concept. It prevents IP and DNS application leaks. You can read more about isolated proxies and Whonix here.…

Qubes is another virtual OS, like Whonix, except Qubes requires hardware virtualization support and uses the Xen hypervisor instead of a host operating system like Whonix does. In theory, Qubes is probably more secure than Whonix because hypervisors have less of an attack surface than the Linux kernel running as a host OS does.


Great work and keep looking for malicious nodes, because we can be sure bad guys will keep doing this.

"We considered the set of new relays at the time, and made a decision that it wasn't that large a fraction of the network. It's clear there's room for improvement in terms of how to let the Tor network grow while also ensuring we maintain social connections with the operators of all large groups of relays"

Hah! I remember that family, and I thought they were up to no good based on my observations, which suggest that the nastiness may have been much worse than you report in the blog. If only it were easier to contact you anonymously with encrypted anecdotes, I would certainly have reported them soon after they started running those relays.

Some years ago another bunch of FDC servers were hired by another "offensive research" company. This earlier family was also notable for its inexplicable ties to the Chicago Board of Trade network.

Unfortunately, future malicious families will probably try to diversify their nodes, so they may be harder to spot, but I am sure you have thought of ways, and will continue to think of ways, to spot dodgy families of nodes.

1. That probably would provide some additional security against application-level attacks, especially on Windows. However remember that VM breakout bugs are a thing. Also, if you use something like Tails, it's not persistent, so you don't have entry guards, actually making you more vulnerable to this kind of traffic correlation attack.

2. Who cares if a VPN provider claims they don't log? Maybe they log everything. How could you tell if they logged everything or not?

You should trust any VPN you don't run yourself as much as you trust a random router somewhere on the Internet. Which is not at all.

Nice that you're trying, but another MAILING list is not going to encourage people to report bad relays to you:

1. It is almost certain that our enemies monitor all your mailing lists.

2. We know from Snowden leaks that the enemy maliciously exploits bug reports to attack "targets". And we know that all Tor users are among their targets.

3. It is almost certain that our enemies use their database of vulnerabilities found in individual personal electronic devices (recall the JTRIG "tool" which does a torified nmap scan of every device "in an entire country" which is attached to the internet) to tailor their attack to particular devices. It follows that anomalies observed by users are likely to tend to de-anonymize them if users report what they have seen in unencrypted communications.

Why wouldn't the "researchers" fully inform the Tor devs of all vulnerabilities ahead of disclosing their findings, so that they could work to address them? Seems very reckless if you're not a bad actor. What is CERT's role in this situation?

There are many possible reasons, all plausible to various degrees. E.g.

1. They wanted to disclose to the Tor project, but CERT didn't let them
2. They wanted publicity and dramatic effect by revealing this attack in the wild at Black Hat
3. They were going to disclose to the Tor project, but far later than expected

As always, without the actual details all we can do is speculate, which isn't always the most useful thing we could be doing.

Does the Tor Project expect more data from the researchers ?

Many thanks to Philip and Damian for "adopting" the node monitoring task.

Tor Project is a victim of its own success, in the sense that Tor has been so successful that increasing numbers of well funded entities are pouring serious time, money, and effort into subverting the Tor network.

Some users have long urged the Project to pay more attention to projects like node monitoring, as well as to devising strategies in legal/psychological/economic/political space. The developments reported in the blog show that we were right. Inevitably this will take time away from work in coding space, but it has to be done.

I'm sure this whole fiasco is working wonders for Alexander Volynkin and Michael McCord's reputations as security researchers.

Maybe running Tor over Tor might have mitigated this specific problem.
i.e., the first Tor's exit asking a second running Tor to access the HSDir.

Since, if I'm understanding this correctly, the tag would have been seen as far as the second running Tor's guard node, but would then get encapsulated and pushed down the first Tor's exit and on to the client.

But then again, even Tor over Tor can't help against the other types of traffic confirmation attacks. Nor can a VPN, for that matter.

All that can be done right now is running more "good" relays!

The answer to that is "maybe, maybe not". There hasn't been enough research into what happens when you connect to tor, over tor. What if you pick relays 1, 2, and 3 for the first circuit, then you pick relays 3, 2 and 1 for the second circuit? Will that deanonymize you? Maybe.

You will have to manage your circuits better to avoid the scenario you mention. The first instance runs as normal but uses just 1 guard or a bridge; this will be the default for everybody very soon. On the second instance, make it also use just 1 guard, but add an exclude clause to the configuration file to exclude the guard(s) of your first instance. You can take the extra step of excluding the guards of the second instance from the first instance, but that might not be necessary, since you are only concerned with your immediate guard not being used on the circuit path of the second instance.

The other scenarios or loops that can happen are not ideal, but you can live with them as long as it does not happen to your immediate guard. Note that the strict nodes option is not used.

As for allowing the first instance to recycle circuits and closing "dirty" circuits, run a cron job script on the first instance that gets the circuit numbers of the streams that are open, and close them with the CLOSECIRCUIT command every now and then. More thought and research should be done on this. It's a better alternative than the more prevalent VPN-to-Tor suggestions that are floating around.
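For illustration, the exclude-clause setup described above might look like this in the two torrc files. The fingerprint below is a placeholder, not a real relay; NumEntryGuards, ExcludeNodes, and StrictNodes are standard torrc options:

```
# torrc for the first Tor instance: a single entry guard.
NumEntryGuards 1

# torrc for the second Tor instance: also a single guard, but never
# use the first instance's guard (placeholder fingerprint).
NumEntryGuards 1
ExcludeNodes $AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
# StrictNodes is deliberately left at its default of 0, as noted above.
```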

Suppose my relay is running, and suppose it's the middle hop between adversary's guard node and adversary's hidden service directory. Will it kill circuits sending relay_early cells backwards? Or is that impossible, since it doesn't have the keys to decrypt that stream?

It will indeed kill circuits if it sees an inbound (towards the client) relay_early cell.

It doesn't have to decrypt the stream to see it, because whether a cell is relay or relay_early is a property of the (per hop) link, not a property of the (end-to-end) stream.

Awesome, now post the data that is relevant so we can track these people down and end them.

Or are the maintainers of Tor too scared to let everyone know who's attacking them, and are in fact trying to cover up the addresses of those trying to attack us?

Get to it, bois. Start dropping information.


Huh? What data? We all know who the researchers who planned to give the BlackHat talk are. I think arma made it very clear in the blog post that we don't know 100% that they are definitely the ones behind this particular attack, but it's most likely them. The Tor project knows as much about the researchers as any other public entity.

Note that they had to go through the trouble of the Sybil attack because they are not a global adversary. So, probably not NSA.

Russia offers $110,000 to crack Tor anonymous network

Should we be concerned?

Other government agencies would pay more; if someone is trying to break tor for cash, they're not going to report it to the Russians.

So no, you shouldn't be concerned about Russia wanting to break Tor, given that most intelligence agencies want to. Unless you live in Russia, all the other agencies (with more money) are more important; and if you live in Russia, based on what I've heard you should be more concerned about what actions the government might take against you for simply using Tor.

Nah, either it's a weird form of psyops, or the Russian government is hilariously naive.

If you really could "crack Tor anonymous network", you could probably make a lot more money by going to intelligence agencies in the US or the UK. $110,000 is chump change even for a private exploit contractor like VUPEN.

All these scary dramatic articles are just click bait written by journalists who don't really know what they're talking about.

Thank you for taking the time to explain all these technical details. I'd be lost without the blog author breaking down how Tor works. I'm learning a lot!

I'm glad the technical details were useful! I want everybody to get up to speed on Tor (as much as you're willing to) so we're all in a better position to decide what sort of a world we want to live in.

Guys, while at it. Let's fix hidden service referer leak too:

Sounds good, submit a patch!

We are not going to change the fate of the world with an anonymity network that is pwnt to fuck and back with $3,000 arma, you are delusional. Tor is pretty much a honeypot more than anything else these days. You guys need to be way more proactive.

1. The idea of using PIR for HSDIR lookups was suggested a long time ago, never implemented of course, would have protected from this attack

2. Using a single entry guard and slowing rotation should have happened four years ago when it was first being suggested, would have greatly reduced the damage of this attack

3. Tor is way too fucking vulnerable to confirmation attacks the entire thing needs to be scrapped and a new network needs to be designed and implemented, Tor failed, come to terms with it, accept it, stop running a network that gets people pwnt on a daily basis now

You guys will never understand defeat because you repeat the "Tor is the awesomest" mantra just like any other propaganda that people repeat over and over and then come to blindly accept as truth. Sorry, I would be better off using a VPN chain than a low latency network that arbitrary assholes can inject nodes on.

Tails users are especially fucked of course because they go through entry guards like a junkie goes through lines of cocaine, who would have ever guessed that was a horrible idea?

Please stop with your ignorant FUD. You've been spouting this all over the blog and the IRC. I'm sorry, but Tor is not "pwnd as fuck", and as much as you hate to hear it, low-latency anonymity networks will never be immune to confirmation attacks. The best we can do is make it very hard both technically and legally, and that can be best done by improving the code and having more users run relays. I know you have a hard-on for high-latency mixnets, but they have a totally different purpose. No one could edit Wikipedia or chat with a friend using a high-latency network, it's just not possible. For networks capable of transmitting data through exit nodes on the time scale required by computers using TCP/IP and which have a sane TTL (aka "all of the internet"), a low-latency network is required. As for using a VPN chain, Tor is not just a 3-hop proxy, it has many other features which make it resistant to a wide variety of attacks.

Please stop spreading your "zomg tor is b0rked!1" nonsense and naively suggesting we move to a poorly studied, extremely difficult to implement, high-latency mixnet which would be incapable of communicating with anything but itself. If you want that, go to the painfully slow, and questionably secure, Freenet (not that I have anything against Freenet, I like it, but it is not a replacement for an anonymity network).

To go over your points one by one:
1) Tor was never meant to focus on HSes; they were always an afterthought. If your priority for fixing HSes is to implement PIR, then you need to take a long hard look at how they work. The next generation of HSes will be focused on fixing issues that are impossible to fix without rewriting them. Wait for those, and please don't make your very first point the criticism of an afterthought.
2) There is a lot of math involved in that. I suggest you read some of the papers involving such things before assuming it's as simple as that.
3) See my original response to your misunderstanding of the differences in the purposes of high and low latency mixnets.

>You guys will never understand defeat because ...
Ah yes, throw in the obligatory fanboi accusations that come up as soon as you realise how little substance the rest of your points have.

>Tails users are especially fucked of course because ...
And that is Tor's fault how? It's as if you criticized internal combustion engines by saying "Wow, look how much pollution this releases when the catalytic converter is removed! Internal combustion engines suck!". You don't see the irony in that?

I'm sorry if I sound excessively confrontational, but you've been spouting this FUD non-stop, and it's getting old.

In short words, Tor fuckins sucks, I give up!

Hi, the new version of Tor is blocked in Iran. Tor Browser 3.6.3 bridges just do not work. Please help; the Iranian regime uses more advanced techniques. You are so retarded.

You should first learn to use punctuation and write in proper English. I can use Tor just fine in Iran... it is a shame you bash what you don't understand.

Same person as above, just thought to give you some instructions... When using the Tor Browser Bundle you should select the "configure" option, select "No" for the first two questions and "Yes" for the third one, then click connect. It should resolve the problem in most cases.

This is experimental software. Do not rely on it for strong anonymity.

"it was deployed in an irresponsible way because it puts users at risk indefinitely into the future."

Even if traffic confirmation/Sybil attacks were not feasible on Tor, isn't encrypted traffic recording by states putting them at risk indefinitely into the future anyway?
I mean, some cryptographers don't predict a brilliant future for the maths underlying current public key cryptography. It would mean that Diffie-Hellman could be broken and the subsequent recorded Tor traffic decrypted in a near future (5 years, 10 years?), when lots of Tor users taking risks on Tor would still be alive.

Betting your life on the maths behind public key cryptography is also a serious risk.

After the Heartbleed problem at the end of April, more and more relays upgraded OpenSSL and exchanged keys.
The discussion was about whether to close out relays not doing so.
Two weeks later I checked the lifetime of relays and found the following:

about 90 relays 9001/9030 unnamed average 4000-5000KB Tor on FreeBSD Guard
50.7.134.* 86d
50.7.159.* 84d
50.7.110.* 84d
50.7.111.* 84d

about 15 relays 9001/9030 Unnamed average 4000-5000KB Tor on FreeBSD Guard
204.45.252.* 119d/86d

At the end of April they had been online between 84 and 119 days, i.e., since well before Heartbleed.

Was FreeBSD safe against Heartbleed at that time?

"using one entry guard rather than three"
"extending the guard rotation lifetime"

I have the impression you focus on the threat of an attacker following the circuits from content server to entry guard. That an attacker wants to know who is looking at that content.

While I think many users are just worried about the opposite. That an attacker who already knows them wants to know what they are doing online.

At least reducing the entry guards makes it a lot easier to judicially order the one entry guard's ISP into submission.

Do you think the Tor network would be able to bear getting rid of the entry guard model and randomly pick a new relay every x requests?

For example web page with 50 graphics. Randomly picked relay A gets requests for the page and the first nine graphics, randomly picked relay B requests for the next ten graphics and so on.
If there is a larger download like an embedded video the formula could be to change the relay every 10 objects or if the sum of object sizes is larger than 1 MByte.

Since anyone can run a relay, there will be malicious relays. The Tor Project tries to catch and remove them, but they can't prevent malicious relays entirely as long as it is an open volunteer network.

So, every time you randomly pick a relay, you have some chance of picking a malicious relay. The purpose of guards (entry nodes that you keep using for a long time) is to roll the dice less often. Assuming you aren't moving around, this makes lots of sense as the only thing the guard knows is your IP (and the middle nodes you're connecting to).

Assuming you are moving around, however, you might consider either not using guards or using a different set of guards for every location. See the research on entry guards for details.
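The "roll the dice less often" point above can be made concrete with a toy calculation. The 5% malicious fraction and circuit count are arbitrary assumptions for illustration:

```python
# Toy model of why guards help: without a guard you re-roll the entry
# position for every circuit; with a guard you roll once and stick.

def p_ever_hit(p_malicious: float, rolls: int) -> float:
    """Chance that at least one of `rolls` independent picks is malicious."""
    return 1 - (1 - p_malicious) ** rolls

no_guard = p_ever_hit(0.05, rolls=1000)  # a fresh entry per circuit
with_guard = p_ever_hit(0.05, rolls=1)   # one long-lived guard

assert no_guard > 0.99                  # near-certain eventual exposure
assert abs(with_guard - 0.05) < 1e-9    # a single 5% roll
```

Of course, the guard user who does lose the roll is exposed for the whole guard lifetime, which is the trade-off the guard rotation research wrestles with.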

If your threat model is concerned about an attacker who knows who and where you are and is willing/able to perform sophisticated judicial or extra-judicial attacks against you specifically, I have bad news: they can most likely deanonymize you at least some of the time, even if your guard is honest. One way is by correlating surveilled traffic from your internet connection with surveilled traffic at the exit (you'll eventually use one they can see, or perhaps one they own); or if they're already looking at your destination (outside Tor), they could correlate traffic there without even monitoring a single part of the Tor network.

Tor does not claim to be able to protect against traffic confirmation attacks, particularly by a global passive adversary and especially not against a global active adversary. :(

The good news is that now that these types of threats are no longer hypothetical (although law enforcement use of them is still secret and obscured by parallel construction) there are renewed efforts to build tools that *are* resistant to them. I expect that we'll soon see Tor making it much harder if not impossible to do these kind of attacks passively, and forcing adversaries to go active is a win as they have a lot more chance of getting caught that way.

Wait a minute, you guys pay people to break into Tor... The Russian government is offering 3.9 million roubles, around $111,000 or £65,000, to anyone who can produce a system for finding data on those using Tor.

Ok, here's the source:

ev_guid_16 = ev_guid_16
ev_guid_22 = ev_guid_22
ev_guid_32 = ev_guid_32

WRITE: /, ev_guid_16, ev_guid_22, ev_guid_32.

Send HTTP GET requests to:{format}/{ip_or_hostname}

fingerprint('image/exif/gpsCoordinates') =
file_ext('jpeg' or 'pjpeg' or 'jpg' or 'pjpg' or 'tiff' or 'gif' or 'png' or 'riff' or 'wav') and
'exif:GPSLatitude' or 'exif:GPSLongitude' or 'exif:GPSDestLatitude' or 'exif:GPSDestLongitude';

The API supports both HTTP and HTTPS.

Now where's my Rubles?

The annual risk of someone being hit by a meteorite is estimated to be one chance in 17 billion, which means the probability is about 0.00000000006 (6 × 10^-11), equivalent to the odds of creating a few tens of trillions of UUIDs in a year and having one duplicate.

In other words, only after generating 1 billion UUIDs every second for the next 100 years, the probability of creating just one duplicate would be about 50%. Or, to put it another way, the probability of one duplicate would be about 50% if every person on earth owned 600 million UUIDs.
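Those figures follow from the standard birthday approximation applied to the 122 random bits of a version-4 UUID; a quick check (100 years at a billion per second actually lands a bit above 50%):

```python
import math

N = 2 ** 122  # number of distinct random version-4 UUIDs

def p_collision(n: float) -> float:
    """Birthday approximation: P(collision) ~ 1 - exp(-n^2 / (2N))."""
    return 1 - math.exp(-n * n / (2 * N))

# 1 billion UUIDs per second for 100 years:
n = 1e9 * 60 * 60 * 24 * 365 * 100
print(round(p_collision(n), 2))  # 0.61 — the same ballpark as "about 50%"
```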

So tracking users via Tor using GUIDs and UUIDs as your attack vector is not an impossibility, nor an improbability. The source code for XKeyScore was written in C, so it stands to reason that if you wanna de-anonymise millions of people just to obtain a better pay grade, these things can be done and done very easily. Why waste time with a system that is inherently broken, and by all accounts broken quite badly? If people go around degrading the security standards to the point where anyone can just come along and do this kind of evil crap, then it's time to go back to the drawing board.

Unless you like the idea of walking around with your brain being controlled by nano-robots serving Google ads 24/7!

Do you expect me to Talk? No Mr OOG we expect you to Die!

Does this Logo remind anybody of anything?

When it comes to the secret services... I think of a few sterling examples of all their hard work: importing Nazis under Project Paperclip, torturing kids under MK-Ultra, inventing the atomic bomb. And when it comes to their logo of a giant octopus, well, it's kind of surprising how much that can look like a super-villain organisation from an Ian Fleming novel.