Tor security advisory: exit relays running sslstrip in May and June 2020

by isabela | August 14, 2020

What happened

In May 2020 we found a group of Tor exit relays that were messing with exit traffic. Specifically, they left almost all exit traffic alone, and they intercepted connections to a small number of cryptocurrency exchange websites. If a user visited the HTTP version (i.e. the unencrypted, unauthenticated version) of one of these sites, they would prevent the site from redirecting the user to the HTTPS version (i.e. the encrypted, authenticated version) of the site. If the user didn't notice that they hadn't ended up on the HTTPS version of the site (no lock icon in the browser) and proceeded to send or receive sensitive information, this information could be intercepted by the attacker.
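Concretely, this is the "sslstrip" technique (named after the tool that popularized it): the attacker rewrites the site's HTTP-to-HTTPS redirect before it reaches the user. A minimal sketch of the idea, using an illustrative hypothetical domain rather than any captured traffic:

```python
# Sketch of what an sslstrip-style attacker does to a victim's first,
# unencrypted request. The traffic below is illustrative (hypothetical
# domain), not anything captured from a real exchange.

def honest_response() -> str:
    """Over plain HTTP, the real site only ever answers with a
    redirect to its HTTPS version."""
    return (
        "HTTP/1.1 301 Moved Permanently\r\n"
        "Location: https://exchange.example/\r\n"
        "\r\n"
    )

def strip_redirect(response: str) -> str:
    """An attacker sitting in the exit's traffic path rewrites
    https:// links back to http://, so the victim's browser never
    upgrades to the encrypted, authenticated site."""
    return response.replace("https://", "http://")

print(strip_redirect(honest_response()).splitlines()[1])
# -> Location: http://exchange.example/
```

The browser stays on the plain-HTTP version, the lock icon never appears, and everything the user sends is readable to the attacker.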

We removed these attacking relays from the Tor network in May 2020. In June 2020 we found another group of relays doing a similar attack, and we removed those relays too. We don't know whether any users were successfully attacked, but from the size of the relays involved, and the fact that the attacker tried again (the first group was offering approximately 23% of the total exit capacity, and the replacement group was offering about 19%), it's reasonable to assume that the attacker thought it was a good use of their resources to sustain the attack.

This situation is a good reminder that HTTP requests are unencrypted and unauthenticated, and thus are still prone to attack. Tor Browser includes HTTPS-Everywhere to mitigate that risk, but it is only partially successful because it doesn’t list every website on the internet. Users who visit the HTTP version of a site will always be at higher risk.

Mitigating this kind of attack going forward

There are two pieces to mitigating this particular attack: the first piece involves steps that users and websites can do to make themselves safer, and the second piece is about identifying and protecting against relays that try to undermine the security of the Tor network.

On the website side, we would like to take this opportunity to remind website admins to always: 1. enable HTTPS for their sites (folks can get free certificates from Let's Encrypt), and 2. make sure a redirect rule for their site is added to HTTPS-Everywhere, so their users connect safely from the start rather than relying on a redirect after an unsafe connection has already been made. Additionally, for web services interested in avoiding exit nodes entirely, we're asking companies and organizations to deploy onion sites.
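For reference, an HTTPS-Everywhere redirect rule is a small XML ruleset; a minimal one for a hypothetical domain looks like this:

```xml
<!-- Illustrative HTTPS-Everywhere ruleset for a hypothetical site -->
<ruleset name="Example Exchange">
  <target host="exchange.example" />
  <target host="www.exchange.example" />
  <rule from="^http:" to="https:" />
</ruleset>
```

Because the rewrite happens inside the browser before any packet is sent, there is no unencrypted first request for an attacker to tamper with.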

One of the more comprehensive fixes we're exploring from the user side is to disable plain HTTP in Tor Browser. Some years ago this step would have been unthinkable (too much of the web used only HTTP), but both HTTPS-Everywhere and an upcoming version of Firefox have experimental features that try HTTPS first by default and then have a way to fall back to HTTP if needed. It's still unclear quite how such a feature would impact the Tor user experience, so we should try it out first at the higher levels of the Tor Security Slider to build more intuition. More details in ticket #19850.

In terms of making the Tor network more robust, we have contributors watching the network and reporting malicious relays to be rejected by our Directory Authorities. Although our "bad relays" team generally responds quickly to such reports, and we kick out malicious relays as soon as we discover them, we still lack the capacity to fully monitor the network and identify all malicious relays. If you've found a bad relay, you can report it by following the instructions on the bad-relays page. We will talk more about how you can help at the end of this post.

Fundamentally, there are two hard problems here: (1) Given an unknown relay, it's hard to know if it's malicious. If we haven't observed an attack from it, should we leave it in place? Attacks that impact many users are easier to spot, but when we get into the longer tail of attacks that only affect a few sites or users, the arms race is not in our favor. At the same time, the Tor network is made up of many thousands of relays all around the world, and that diversity (and the resulting decentralization) is one of its strengths. (2) Given a group of unknown relays, it's hard to know if they're related (that is, whether they are part of a Sybil attack). Many volunteer relay operators look to the same cheap hosting providers like Hetzner, OVH, Online, Frantech, Leaseweb, etc, and when ten new relays show up, it's not straightforward to distinguish whether we just got ten new volunteers or one big one.

We have a design proposal for how to improve the situation in a more fundamental way by limiting the total influence from relays we don't "know" to some fraction of the network. Then we would be able to say that by definition we trust at least 50% (or 75%, or whatever threshold we pick) of the network. More details in ticket 40001 and in the related threads on the tor-relays mailing list.

The Tor Project’s capacity

In 2019, we created a Network Health team dedicated to keeping track of our network. This team, in part, would help us more quickly and comprehensively identify malicious relays. Creating this team was an important step for the Tor Project. Unfortunately, in April 2020 we had to lay off 1/3 of our organization due to the fundraising impacts of COVID-19. This led us to reorganize our teams internally, moving Network Health team staff to other parts of the organization. Now all of us at Tor are wearing multiple hats to cover everything that needs to be done.

Because of this limited capacity, it takes longer than we would like to tackle certain things. Our goal is to recover our funds to be able to get that Network Health team back in shape. We also know we owe our volunteers more attention and support, and we are discussing internally how to improve that given our limited capacity at the moment.

These are not easy goals. We are still in a global crisis that is affecting many economically, including our donors and sponsors. Simultaneously, one of the main sponsors of the Internet Freedom community has been hit, funding has ceased, and we are helping in the fight to restore it.

We would like to invite people to support Tor in any way they can. There are millions of people who use Tor every day, and your support can make a big difference. Making a donation will help us increase our capacity. Raising awareness about Tor, holding Tor trainings, running an onion service, localizing Tor tools, conducting user research, and running your own relay are also impactful ways to help. There are several Relay Associations around the world that run stable Tor exit relays and you can help by donating to them, too.

Comments

Please note that the comment area below has been archived.

August 14, 2020

Permalink

Can OONI detect this type of attack? "23% of the total exit capacity"? Wow. At what point could it be a state-level agency? HTTPS-first sounds great, and I hope it isn't easy for intermediaries to degrade it to HTTP. In my experience, the higher levels of the Tor Security Slider interfere with Javascript and WebGL, and HTTPS simply denotes whether Javascript is allowed to run. It's been that way for years.

An automated tool that tests a bunch of things is exactly what you want, yes. But what you really want is more of a "reverse OONI" -- the cool thing about OONI is that you get a bunch of different vantage points around the world, whereas here, you can do tests of all the Tor exit relays from a single Tor client.

One tool that's useful here is exitmap:
https://gitlab.torproject.org/tpo/network-health/exitmap
and it's designed in a modular way so you can make modules to test each new behavior you're considering.
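(To make the modular idea concrete, here is a schematic Python sketch of such a check — note this is *not* exitmap's real module API, just the shape of the idea: fetch a page through each exit and flag exits where the HTTP-to-HTTPS redirect went missing. All names and the domain are hypothetical.)

```python
# Schematic sketch of an exit-scanning check (not exitmap's actual
# module interface): given a way to fetch a URL through a specific
# exit relay, flag exits that tamper with the HTTP -> HTTPS redirect.

def redirect_is_intact(headers: dict) -> bool:
    """The honest site answers plain HTTP with a redirect to HTTPS."""
    return headers.get("Location", "").startswith("https://")

def scan_exits(exits, fetch_via_exit):
    """fetch_via_exit(fingerprint, url) -> response headers dict.
    Returns the fingerprints whose responses look tampered with."""
    suspicious = []
    for fp in exits:
        headers = fetch_via_exit(fp, "http://exchange.example/")
        if not redirect_is_intact(headers):
            suspicious.append(fp)
    return suspicious

# Fake fetcher standing in for a real per-exit Tor circuit:
def fake_fetch(fp, url):
    if fp == "BAD":  # this exit strips the redirect
        return {"Location": "http://exchange.example/"}
    return {"Location": "https://exchange.example/"}

print(scan_exits(["GOOD1", "BAD", "GOOD2"], fake_fetch))  # -> ['BAD']
```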

The public version of exitmap comes with some modules already, and the bad-relays team has some non-public modules that they use for testing as well. You can read my tor-talk post from long ago for more thoughts on the tradeoff of non-public modules (since after all, we're generally fans of free software here):
https://lists.torproject.org/pipermail/tor-talk/2014-July/034219.html

As for the "https only mode" idea, and "I hope it isn't easy for intermediaries to degrade it to HTTP", yes exactly. The phrase you want to learn about is "downgrade attack":
https://en.wikipedia.org/wiki/Downgrade_attack
Protocol downgrade attacks are a real issue, and one of the usability questions here will be how to warn the user that https didn't work, and would they like to try http instead, in a way that leads the user to the right behavior.
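(A toy sketch of that "https-first with explicit fallback" policy — hypothetical names, not Firefox's or Tor Browser's actual implementation — shows where the usability question lives:)

```python
# Toy sketch of an "https-first with explicit fallback" policy. The
# function names are hypothetical, not any browser's real code.

def https_first(url, reachable_over_https):
    """Try the https:// version first; fall back to http:// only via
    an explicit decision point, so a silent downgrade is impossible.
    Returns (url_to_load, needs_user_warning)."""
    if url.startswith("http://"):
        upgraded = "https://" + url[len("http://"):]
    else:
        upgraded = url
    if reachable_over_https(upgraded):
        return upgraded, False
    # https "failed" -- but an attacker in the middle can *make* it
    # fail, so the UI must warn before loading the insecure URL.
    return url, True

print(https_first("http://site.example/", lambda u: False))
# -> ('http://site.example/', True)
```

The hard part is the warning step: it has to be scary enough that users don't click through it reflexively, since "https didn't work" is exactly what a downgrade attack looks like.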

Hope that helps!

August 14, 2020

Permalink

Great post. Wow, 2016? I have been waiting for HTTPS connections only, for a long time. Looking forward to seeing this implemented.

The https:// only option is coming in the next Firefox release. In FF Nightly (version 81), which I use normally, it is working already, though you have to enable it in about:config; I think it is already working in FF Beta (version 80) as well. So once Tor Browser, which uses FF-ESR, gets to version 80, it will be possible to use it in Tor Browser, and I hope it will be on by default there.

August 14, 2020

Permalink

Hang on wait a minute. You mean there are cryptocurrency exchange sites that even *have* a functioning unencrypted http version of their site?? This whole "attack" -- and I say that in air quotes because the basic idea is extremely primitive and not Tor-specific at all, and in fact it goes back to the days of unencrypted WiFi and ARP cache poisoning -- would be impossible if http://cryptocurrency.com were a simple page that said "Redirecting you to https://cryptocurrency.com".

Even if the attacker was preventing the redirect, the user would have been like "wtf is wrong with this thing. Why won't it redirect?" at which point the user might type https:// manually, or restart Tor browser and get a new circuit, or try again 10 minutes later and get a new circuit also. That is, in contrast to displaying a fully functional insecure version of the site, where the user will try to log in and transfer money.

Why would a site ever, *ever* allow cryptocurrency addresses, usernames, passwords, cookies, or any sensitive information to be accessible over http in this day and age?

I feel like I missed something important? Because all I see here is bad security practice on the part of the exchanges' web developers, and an age-old well-known weakness in the way Tor and insecure http have always worked.

Which, if so, is great news because it means Tor is still as strong as ever. I've always appreciated Tor Project's transparency about these things, and I can't express how important Tor is to me! Thank you!

Yeah, there is a catch. The website here behaves just as you described: if you connect to it over http, it gives you a redirect to its https version. It refuses to do anything else over http.

But imagine you're an attacker in the middle. *You* pretend to be the http website, and you run a reverse proxy which sends the traffic to the real website over https. Then the real website sees an https connection, so it's happy to send or receive sensitive information, but that https connection is being made by the attacker. Meanwhile the user is having an http conversation with the attacker, and thinks it's the real site but it isn't.

This is exactly the problem that HTTPS-Everywhere is trying to tackle: it's that first connection, where you use http, that could send you anywhere at all. If you rely on the website to redirect you, then if you never actually reach the website, you never get the redirect.

That said, yes you're right, this is an age-old attack and the answers are the same as they have always been. The reasons to mention it here are (a) the scale of the Sybil attack involved in deploying it this time around, and (b) to highlight some of the longer-term changes that we should make, like the shift to disabling http in Tor Browser, and the idea of partitioning relays into "known" and "unknown" sets and putting a cap on how much traffic the unknown set can get.

August 14, 2020

In reply to arma

Permalink

Ah, okay. I'm familiar with that kind of attack, I just didn't think it all the way through before posting. Thanks for clarifying and hopefully other readers will find it helpful as well.

The idea of partitioning known and unknown relays is new to me but sounds very interesting. I'll definitely read up on this. Thanks for bringing it to my attention.

The easy answer: .onion domains come with their own end-to-end encryption, so no, it is way less important to use https with them.

The more complex answer: it depends on the set-up for the individual onion service. Specifically, it depends where the Tor process that runs the onion service lives, relative to the website that it forwards its traffic to. In many cases, they will live on the same computer, and then the connection between them is over the local network, and there's no place an attacker can get in to attack. But let's say I run an onion service and point its traffic to http://wikipedia. Now the traffic between the user's Tor Browser and the onion service is protected (encrypted, authenticated) by Tor, but the traffic between the onion service and wikipedia is http (and thus attackable, if somebody is in the right place on the internet). That's one of the reasons why big sites that offer an onion service (Facebook, BBC, etc) offer their onion service over https: there is no single computer which runs Facebook's website, so there will be some traffic somewhere on the internet between the Tor that hosts facebook.onion and the website that hosts facebook.com, and it's safest for that traffic to be wrapped in its own layer of https encryption. In the future, we're heading toward having Let's Encrypt be able to give out https certs for onion domains for free, and then it will be something that more people do.
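(Concretely, the difference comes down to where the onion service's torrc forwards its traffic; the paths and addresses below are hypothetical:)

```
# torrc for an onion service
HiddenServiceDir /var/lib/tor/my_site/

# Safe: the web server is on the same machine, so the hop from Tor
# to the site never crosses a network an attacker could sit on.
HiddenServicePort 80 127.0.0.1:8080

# Risky: forwarding to a remote plain-http host means the leg between
# this Tor process and that host crosses the open internet unencrypted.
# HiddenServicePort 80 203.0.113.10:80
```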

The answer to your actual question: onion services were not attacked here, and aren't vulnerable to the attack we're talking about here.

Shameless plug: watch the Defcon talk on onion services and what security properties they do/don't provide :) https://www.youtube.com/watch?v=Di7qAVidy1Y

August 14, 2020

In reply to arma

Permalink

Thanks! Should the Tor Project discourage the usage of EV certificates for .onion domains by the likes of DuckDuckGo, Facebook, etc. since the next Tor Browser will almost certainly no longer distinguish them like all modern browsers?

Yay https certificates. I don't think I have an opinion on what kind of https certs people should get. (Ok, I do have an opinion: you should get the free kind.)

The reason these folks got EV certs is because up until February of this year, EV certs were the only kind you could get for onion addresses. In February, thanks to the help of a friendly person from Mozilla (now at Fastly), we managed to get the official policy changed. So now you can get a DV cert for your onion domain:
https://cabforum.org/2020/02/20/ballot-sc27v3-version-3-onion-certifica…

Now, it turns out Let's Encrypt isn't ready to issue them yet. They put in a funding proposal to OTF to build the onion verification part, but, see above links about the OTF mess. So we're working to help them get funding from some other route.

August 14, 2020

In reply to arma

Permalink

Thank you once more, I learned a lot today, especially with the DEF CON video :)

August 17, 2020

In reply to arma

Permalink

This kind of information is very useful and I for one would like to see more official news (informed rumor?) like this here!

HSTS is okay, but it doesn't by itself solve the problem that the very first request, made before the HSTS information is received, can still go out unencrypted and unauthenticated. So on its own it is not a complete fix. Getting onto the HTTPS-Everywhere rule list and/or into the browser's HSTS preload list would be the way to go.
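(For reference, the HSTS mechanism under discussion is a single response header sent over HTTPS; the `preload` directive is what signals eligibility for the browsers' built-in preload list:)

```
Strict-Transport-Security: max-age=63072000; includeSubDomains; preload
```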

August 17, 2020

In reply to gk

Permalink

> getting on the HTTPS-Everywhere rule list and/or in the browser HTTPS preload list would be the way to go.

I urge Tor Project to maintain situation awareness regarding the experience of Tails users. Currently the rule list you mention loads early in a Tails session, but I fear that at times users in a hurry might arrive at an http site before the list loads.

I'd love to see more guest posts here from Tails Project. It seems that they did not post as they have in the past when a new version came out. I hope cooperation between Tails and Tor remains close.

August 17, 2020

Permalink

Thank you for this informative post. But I think you should name names.

> (the first group was offering approximately 23% of the total exit capacity, and the replacement group was offering about 19%)

Was this an announced family or a covert family? Earlier this year, posters complained about indications of suspicious activity by a large family in that time frame, and now we are of course wondering if you are talking about the family we think you are talking about.

If so, speaking for myself only, I do not use cryptocurrency so it should raise a red flag that I have observed anomalies while using a Secure Drop journalism site in the time frame, including:

o a huge flow of outbound traffic (dozens of megabytes) through the circuit in question, which seemed excessive to upload one small file (kilobyte size),

o mysterious circuit "freezes" (possibly caused by some kind of interference with the exit node, possibly *not* with the cooperation or knowledge of the person running that node) which resulted in failures to properly log out of a Secure Drop.

The nature of activity suggests that my adversary was likely FBI seeking to deanonymize a would-be whistleblower.

I believe Tor Project should assume that adversaries do run high bandwidth Tor nodes whose sole purpose is to deanonymize or infect a small number of targets. For example, it is said that FBI's high bandwidth Tor exit nodes recently targeted someone who had been abusing Tor for, you guessed it, on-line sexual abuse of minor children. However, while FBI is apparently willing to talk "off the record" about that kind of target, the history of FBI/CIA strongly suggests that troublesome journalists and whistleblowers are more representative of the target list. BTW, one important story which has not been widely covered is the revelation that CIA has begun a program of massive cyberattacks apparently targeting many US citizens (recent rule changes have lent this ugly activity a thin veneer of "legality").

In his book Dark Mirror, Barton Gellman praises Werner Koch for his years of lonely work maintaining GPG (fortunately some years ago others began to contribute which has greatly improved the health of GPG), in addition to praising Secure Drop. Please note that as governments around the world (e.g. Poland, Brazil, Belarus, USA) crack down on citizens who are trying to communicate information in the public interest to journalists, Secure Drop (and privacy/anonymity-enhancing software generally) are more vital than ever. Coverage of the recent election in Belarus has been largely drowned out by coverage of upcoming US election, which is alarming because there is good reason to think Putin's operatives have coached Russian-favored candidates in both Belarus and the USA in the art of fixing an election.

IMO, SD desperately needs some love right now, and I urge TP to post here a tutorial on using Secure Drop sites safely. What kind of behavior is expected and what should raise an immediate red flag? How to check that the onion address you have is correct and that the site is running a fully patched SD? Is the list at an onion site allegedly run by FOTP

https://secrdrop5wyphb5x.onion/directory/

reliable and up to date? If so, be aware that my attempts to use that directory result in time outs.

SD had serious flaws which have been fixed but apparently many news sites never reinstalled SD and those that did (e.g. The Intercept) failed to publicize the new (much longer) onion address. I hope TP can work with SD, EFF, ACLU, and major news sites to help make sure that people know the correct onion address.

The webpage allegedly owned by The Intercept

https://securedrop.org/directory/intercept/

says that the current onion address for the Secure Drop operated by The Intercept is

xpxduj55x2j27l2qytu2tcetykyfxbjbafin3x4i3ywddzphkbrd3jyd.onion

I believe that users should stop expecting orgs to generate a semi-memorable short onion address. Rather, I suspect that best practice may be to keep secure drop sites on a dedicated USB, preferably the kind with a R/O tab.

August 20, 2020

In reply to Gus

Permalink

> the attack described doesn't affect SecureDrop users or other onion sites because onion services does not use exit nodes

Thank you Gus, that is very good to know!

Now I am wondering if my experience has something to do with the DDOS attacks on onion sites...

> As explained in the blog

I can't find any such explanation in the original blog post to which I was responding.

> Thank you for this informative post. But I think you should name names.

According to the very useful post by nusenu in medium.com (see the link in another comment on this page), two of the rogue exit nodes had the nicknames "smell" and "king".

August 17, 2020

Permalink

funny, how you do not mention nusenu who researched this issue A LOT

funny, how you ignore the fact that it was known for weeks that malicious relays were around before those relays were removed

you should write a statement on this. maybe, next time, before first news sites start writing about this issue

I also read about this issue in blog posts written by Nusenu and about people showing that there is a larger issue in tor mailing lists. The bad relay reporting and removal timeframe must get shorter, to prevent future attacks earlier.

Could TP please say whether or not the continuing attack mentioned by nusenu in the Medium piece turned out to be the one sslstrip attack described by isabela above?

Alas, I recognize some of the nodes mentioned in the nusenu piece as ones I suspected of bad behavior.

@ Nusenu: The new medium "privacy policy" is quite nasty. Find another place to post. We cannot accept sites which claim that by continuing to use the site we are "accepting" their sharing of information with contractors, advertisers, LEAs, etc. God, the modern internet is disgusting.

Great link and a huge shout-out to nusenu!

The post is from Aug 2020 so it probably does describe the second phase of the attack described by isabela. Alas, I recognize several of the exit nodes named by nusenu as ones which I became suspicious of in recent weeks.

This quote is notable:

> So far 2020 is probably the worst year in terms of malicious Tor exit relay activity since I started monitoring it about 5 years ago. As far as I know this is the first time we uncovered a malicious actor running more than 23% of the entire Tor network’s exit capacity. That means roughly about one out of 4 connections leaving the Tor network were going through exit relays controlled by a single attacker.

TP really needs to discuss how we can prevent a well-funded and determined attacker from seizing more than 23% of exit node bandwidth.

For newbies: a fairly small number of exit nodes have large bandwidth; most have much much smaller bandwidth. 23% of exit node bandwidth being controlled by a Sybil attacker running ssl strip means that one in four Tor circuits were stripped during July. This is very different from saying that an attacker controlled one in four exit nodes. In the attack described by isabela and nusenu, the attacker used about a hundred exit nodes and even registered them as an official family, which means that the attacker used a throwaway email to notify TP that these nodes formed a family under the control of a single entity. This does not mean that the attacker explicitly told TP that this was going to be a huge family of rogue exit nodes. But the lack of information about an operator who proposed to have access to the datastreams exiting one in four circuits should have raised a giant red flag. Yet TP accepted the new family without question.

Cannot TP at least ask operators of huge new families to check a box saying "I promise not to run sslstrip"? Even better, check that box for each node in the family.

Of course military hackers (US, RU, CN, IL, UAE etc) will not hesitate to cynically check the boxes in bad faith. But at least TP will have something which might one day be used to prosecute the bad actors. Here we can look to the trend in which US and RU governments are increasingly willing to indict foreign government hackers in their own courts, which at least can create travel difficulties for the bad guys. E.g. no more road trips for Alexander, Clapper, Haspel, or Hayden and their RU counterparts.

torproject.org --> Download Tor Browser --> Download for OS X (macOS). You can choose your language in the top right corner of the page. If that isn't enough, try the white links under the circles. In particular, "Download in another language or platform". Speaking of which...

DEVELOPERS! On Tor Browser's download page, don't you think it's about time to re-word "OS X" into "macOS"? Apple renamed it to macOS in 2016.

August 17, 2020

Permalink

", the arms race is not in our favor."

"Arms race" is too vague a term. Wouldn't mitigating plausible attacks ahead of time be a better idea?
For example, hardening crypto against quantum computing now, not only after the breaking of RSA/curves/(AES) is in the press, long after intelligence agencies have secretly built such machines with billions in funding.

> "the arms race is not in our favor."

Not a quote from the blog or comments above, agreed? Source?

Since the subject has been raised, I would like to stress that while the enemies of Tor include some of the richest, most powerful, and most unscrupulous governments which have ever existed, and all (it seems) of the richest, most powerful, and most unscrupulous megacorporations which have ever existed (to some extent outside state control), by no means does everything always go their way. They have systemic weaknesses of their own (e.g. information overload) which we can exploit to increase our chances of long-term survival, just as we have systemic vulnerabilities, such as the reliance on volunteer node operators, which our enemies do not hesitate to exploit.

August 20, 2020

Permalink

PLEASE do not ban plain HTTP.

Let's Encrypt is refusing to issue me a certificate. They are acting as internet censors. PKI should not decide who can publish on the web.

August 21, 2020

Permalink

Clearly nusenu's post and isabela's post are discussing large scale Sybil type attacks, but I am not sure they are discussing the same attack.

However, I want to see TP acknowledge and respond to a very important point nusenu raises.

It seems that in at least one recent Sybil-type attack, an adversary suddenly tried to register about one hundred new exit nodes. This was initially rejected because TP's automated tooling recognized that this was an undeclared new family. But after the adversary officially registered the new nodes as a family, apparently using a throwaway email, the new family was accepted for "fast exit node" and other roles. It was only then that the adversary started using sslstrip and doing gosh knows what else.

This raises the question of why TP does not considering introducing new rules in which large families must not only be declared, but operators must provide some accountability.

I am aware that as TP often reminds users, the Tor network is run by volunteer node operators, but nusenu's point is self-evident: there is a world of difference between someone running one or two fast exit nodes (e.g. Edward Snowden, it is said) and someone who controls more than 23% of the exit node BW in the entire Tor network.

In the other thread (on the DDOS attacks on onions), TP proposes to discriminate among Tor users, or even to track them across the web with unique data "inserted" into unnamed software associated with Tor use which is not Tor Browser ("tokens"), an obviously horrible and unacceptable idea.

But in this thread, having more stringent vetting for anyone who controls more than a few percent of the entire exit node BW seems to be an idea worth exploring.

August 21, 2020

Permalink

Our enemies in the US Congress have introduced legislation which would encourage "website owners" (undefined in the drafts) to "hack back" (an undefined term in the drafts) when they suspect a device connected to their site is acting maliciously.

For obvious reasons (look for and read the EFF post on these laws if the reasons are not clear), this is a very bad idea which amounts to political grandstanding at the expense of everyone's security.

But if such laws were enacted, and assuming that Tor itself is not made illegal by them, it seems that in future, when TP detects exit nodes running sslstrip or doing something else illegal, TP can do some pwnage of its own, in an attempt to track back the attacker to his or her lair. Probably Citizen Lab can help. Attribution is difficult but in many cases investigated by CL, a C&C server was located which held exfiltrated information or other data which unambiguously identified the bad guy as a client of a specific cyberwar-as-a-service company. Among the outed companies are NSO Group and Hacking Team (which are apparently partnered), and Gamma International. Further in some cases CL was able to locate client lists in unsecured servers not actually owned by the attacker, but being abused by the attacker to hold stolen data. In some cases CL has even been able to unambiguously implicate particular governments, which have included UAE and MX.

August 23, 2020

Permalink

> PLEASE do not ban plain HTTP.

HTTP is a disease. HTTPS only. The so-called webmasters of sites using HTTP only need to step up to the plate and not be lazy and implement HTTPS only. This really is about laziness.

August 26, 2020

Permalink

One alternative to disabling http entirely is to make https the default in the UI; i.e., if the user types a URL without a protocol scheme (e.g. starting with "www."), the browser interprets it as an https:// URL, and only tries the http:// URL if the user explicitly types it. Is this what is meant by the upcoming Mozilla change?
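(For concreteness, the scheme-defaulting being described amounts to something like this hypothetical sketch, not Mozilla's or Tor Browser's actual code:)

```python
# Sketch of scheme defaulting for typed URLs (hypothetical logic,
# not any browser's real implementation).

def normalize_typed(url):
    """A bare hostname defaults to https://; plain http:// is used
    only when the user typed the scheme out explicitly."""
    if "://" not in url:
        return "https://" + url
    return url

print(normalize_typed("www.example.org"))  # -> https://www.example.org
```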