OpenSSL bug CVE-2014-0160

A new OpenSSL vulnerability in versions 1.0.1 through 1.0.1f is out today: it can be used to reveal memory to a connected client or server.

If you're using an older OpenSSL version (earlier than 1.0.1), you're safe.

Note that this bug affects way more programs than just Tor — expect everybody who runs an https webserver to be scrambling today. If you need strong anonymity or privacy on the Internet, you might want to stay away from the Internet entirely for the next few days while things settle.

Here are our first thoughts on what Tor components are affected:

  1. Clients: The browser part of Tor Browser shouldn't be affected, since it uses libnss rather than openssl. But the Tor client part is: Tor clients could possibly be induced to send sensitive information like "what sites you visited in this session" to your entry guards. If you're using TBB we'll have new bundles out shortly; if you're using your operating system's Tor package you should get a new OpenSSL package and then be sure to manually restart your Tor. [update: the bundles are out, and you should upgrade]
  2. Relays and bridges: Tor relays and bridges could maybe be made to leak their medium-term onion keys (rotated once a week), or their long-term relay identity keys. An attacker who has your relay identity key can publish a new relay descriptor indicating that you're at a new location (not a particularly useful attack). An attacker who has your relay identity key, has your onion key, and can intercept traffic flows to your IP address can impersonate your relay (but remember that Tor's multi-hop design means that attacking just one relay in the client's path is not very useful). In any case, best practice would be to update your OpenSSL package, discard all the files in keys/ in your DataDirectory, and restart your Tor to generate new keys. (You will need to update your MyFamily torrc lines if you run multiple relays.) [update: we've cut the vulnerable relays out of the network]
  3. Hidden services: Tor hidden services might leak their long-term hidden service identity keys to their guard relays. Like the last big OpenSSL bug, this shouldn't allow an attacker to identify the location of the hidden service [edit: if it's your entry guard that extracted your key, they know where they got it from]. Also, an attacker who knows the hidden service identity key can impersonate the hidden service. Best practice would be to move to a new hidden-service address at your convenience.
  4. Directory authorities: In addition to the keys listed in the "relays and bridges" section above, Tor directory authorities might leak their medium-term authority signing keys. Once you've updated your OpenSSL package, you should generate a new signing key. Long-term directory authority identity keys are offline so should not be affected (whew). More tricky is that clients have your relay identity key hard-coded, so please don't rotate that yet. We'll see how this unfolds and try to think of a good solution there.
  5. Tails is still tracking Debian oldstable, so it should not be affected by this bug.
  6. Orbot looks vulnerable; they have some new packages available for testing.
  7. The webservers in the https://www.torproject.org/ rotation needed (and got) upgrades. Maybe we'll need to throw away our torproject SSL web cert and get a new one too.
Anonymous, April 08, 2014

Is there going to be a new Tor version for my OS coming out, or do I have to update OpenSSL? I have no idea how to do that. Any help, please?

New releases of the Tor Browser Bundle which include the security upgrade are on the way.

If you use another version of Tor distributed with your operating system, you should ask the people who maintain that package for your operating system. You should also ask those who maintain OpenSSL for your operating system.

Anonymous, April 08, 2014

Let me get this straight: an attacker can learn which relays and clients are communicating with each other, and de-anonymize traffic?

Anonymous, April 08, 2014

OK, I'm not as tech-savvy as many on here. Am I understanding this correctly, though?

The bug does not allow your computer to be directly compromised or identified? It can allow people to see what data is going back and forth, though. It can allow the first computer in Tor to see what other websites you have visited in this session and what cookies you have?

So if you follow Tor's advice and never enter personally identifying information, you are still safe? It is not like the last big problem, then?

Thanks for any clarification; there must be lots like me who don't really understand the significance of this.

I would also really appreciate an answer to this. I am still pretty confused about whether or not the Tor Browser Bundle was compromised.

It can allow the first relay in your Tor circuit to see what other websites you have visited in this session, yes. That's because your Tor client might keep past destinations (e.g. websites you visited) in memory, and this bug allows the SSL server (in this case, the first relay in your Tor circuit) to basically paw through your Tor client's memory.

If you visit https websites using your browser, though, your Tor client will never have your web cookies in its memory, because they are encrypted: just as the exit relay can't see them because they're encrypted, so too your Tor client can't see them:
https://svn.torproject.org/svn/projects/articles/circumvention-features…
https://www.eff.org/pages/tor-and-https

The Tor Browser, which is based on Firefox, is not affected. But your Tor client, which is the program called Tor that comes in the Tor Browser Bundle and that your Tor Browser proxies its traffic into, is affected.

I'm sorry this is complicated. If the above doesn't make sense, a little box on a blog comment isn't going to fix it for you. I recommend starting at
https://www.torproject.org/docs/documentation#UpToSpeed

They could potentially show up in memory, yes. That's because one of the ends *is* your Tor client, and the other end is the hidden service itself. If the entry guard ran this attack in either case, it could potentially learn things.

Anonymous, April 08, 2014

It can also get passwords, can't it? So we should all be visiting all of the places we have passwords for, like Amazon, eBay, and websites where we leave comments, and checking to see if we need to change our passwords for any of them?

Yes, maybe. But that's a question about those websites and their past security practices, and has nothing to do with Tor or the Tor Browser Bundle.

That is, if it is a problem, it is because of security at the webserver end, not the web browser end.

Anonymous, April 08, 2014

Would my privacy and anonymity be safe if I use Tails to visit vulnerable clearnet/.onion sites and use vulnerable relays?

Anonymous, April 08, 2014

I was running a virtual machine and used both the guest OS (Tails) and the host (Windows) to visit vulnerable sites. Could the attacker read and decipher the memory content of the virtual machine?

As mentioned before: Tails was not exploitable because it is using the older OpenSSL that doesn't support the heartbeat feature. You got lucky there, pal. ;)

Anonymous, April 08, 2014

A client-side question:

What exactly does it mean that "... the Tor browser is not affected but your Tor client is affected ..."? I understand that when one operates the Tor browser it connects to the Tor network through the Tor client. What kind of data could leak out from the Tor client?

You can think of the Tor client like your VPN provider. Or if that metaphor doesn't make sense, think of it like your network.

For example, when you're sitting at Starbucks just using the Internet directly, people next to you get to see your network traffic and learn things about what you do -- if you only go to https sites then they learn which sites you go to but not what you send since it's encrypted. And if you go to an http site then they learn not only where you went but also what you said.

The Tor client sees exactly this same information.

Or if that metaphor didn't make sense, but you know about the Tor exit relay issue: the Tor client gets to see everything that all your exit relays get to see.

No, I think this is bad advice. TBB ships with its own libssl.

(The answer to the original question, if you're a normal user, is "you can read the TBB changelog to find out". If you're not a normal user, there are plenty of ways you can actually learn on your own, none of which fit into a little blog comment box.)

Anonymous, April 09, 2014

I'm guessing those people using TBB 3.6 Beta-1 are affected but have nowhere to go for Pluggable Transports except the last stable TBB-PT, which has its own security problems, hence the update to a newer Firefox ESR.

Anonymous, April 09, 2014

Another out-of-bounds and pointer-arithmetic bug... shouldn't we just walk away from C and C++? These two languages make it simply too easy to make bad mistakes like these... I am quite sure that by just dropping them we could improve our security a lot. It is simply too hard for a human being to write correct programs in such languages.
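
For readers wondering what this kind of mistake actually looks like, here is a minimal sketch of the bug class in C (deliberately simplified; this is not OpenSSL's actual heartbeat code, and the function names are made up). The code trusts a payload length supplied by the peer instead of the real size of the record it received:

    #include <stdint.h>
    #include <string.h>

    /* Sketch of the bug class: echo back `claimed_len` bytes of a received
     * heartbeat record without checking the claim against the record's
     * real size. `reply` is assumed to be a caller-provided 64KB buffer. */
    size_t buggy_heartbeat_echo(const uint8_t *record, size_t record_len,
                                uint16_t claimed_len, uint8_t *reply)
    {
        (void)record_len;                    /* the real length is never consulted: that's the bug */
        memcpy(reply, record, claimed_len);  /* may read far past the end of `record` */
        return claimed_len;
    }

    /* The corresponding fix is a bounds check before copying. */
    size_t fixed_heartbeat_echo(const uint8_t *record, size_t record_len,
                                uint16_t claimed_len, uint8_t *reply)
    {
        if (claimed_len > record_len)
            return 0;                        /* silently discard malformed records */
        memcpy(reply, record, claimed_len);
        return claimed_len;
    }

The actual fix released in OpenSSL 1.0.1g amounts to the same kind of check: discard the heartbeat record if the claimed payload length doesn't fit inside the data that actually arrived.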

Also... C was a good idea back in the '70s... but with the hardware of today it makes little sense to use it (except for OSes)... it doesn't even support multi-core processors natively (you need an external library to use threads, which is heavy and memory-hungry, and the language itself has no safe native way to do inter-process communication).

Most applications of today, Tor included, could be written in a modern language, without pointer arithmetic (which is useless) and with a garbage collector that frees the developer from having to remember to allocate and free memory. What about Google's Go, for instance? It is fast, it is designed to make it easy to write error-free software, it natively supports multi-core and multi-threaded programming in a lightweight way, it natively supports inter-process communication, it doesn't need pointer arithmetic, and it has a garbage collector...

Really, it is time to evolve. Especially if we care about security.

You are jumping to conclusions, because in the background, behind your magic VM curtain, everything behaves like before, and there memory and addresses and "pointers" are the fabric every program runs on.

With VMs (which actually JIT-compile your bytecode into native code) you are just shifting your dependence onto layers upon layers of layers... Your program might be untouched, but if your bottom layer has a vulnerability, the outcome isn't any different.

A VM is just a layer, and if one layer breaks your program is broken; replace "layer" with "sandbox" and the same holds.

What is really needed is a best-practice "book": examples of vulnerable code and how to do things better.

You are wrong. I am not jumping to conclusions.

I have several decades of experience in programming, in more languages than I can remember, and I know perfectly well what I am saying.

And no, no "best practice book" or amount of experience can help you here. This is the usual excuse that C and C++ fanboys throw at you whenever you tell them the pure and simple truth: the only thing that DOES help is a language that simply stops you from doing stupid things and that takes care of things such as memory management that are better left to the machine itself.

Please also note that the people who work on complex projects such as OpenSSL are usually very experienced programmers with a huge background in security and "best practices"... they try their best to avoid bugs... yet they DO make mistakes, as this (very serious) bug proves. Probably now you see my point. No matter how many books you read, no matter how much experience you have, you will make mistakes. More so if the language seems to be DESIGNED to work against the developer.

And yes, at the bottom of every VM there is low-level code. So what? I did not state that this solution would be perfect... I said that it would be BETTER.

In a few words... it is much harder to find and exploit bugs in a VM (which, in time, gets better and better) than to find and exploit bugs in the several thousand (or even million) lines of code in every piece of software ever made, because... well... you are human... and humans make mistakes. Humans forget things such as freeing a block of allocated memory; machines don't. Humans make math mistakes such as accessing an index outside an array of bytes; machines don't. Humans forget to release resources such as open files; machines don't. In simple words... machines are just better than humans at certain tasks... so we should just let them do it in our place. And unfortunately C and C++ fail at this.

We cannot completely avoid bugs; it is in our nature to make mistakes. But we can make the surface for a possible attack smaller. My proposal is exactly this: make the attack surface smaller by using better, newer languages that make it easy to write better software and harder to introduce disastrous bugs.

Well, I think that C and C++ are still the best languages out there for doing actual computation, or more generally for anything where speed is important...unless you want to break down and code in assembly language.

That being said, I agree that they should not be the medium for network-facing applications. Type-safe languages may be the only practical way to prevent buffer-overrun attacks.

Well, to my understanding this bug has been there for one year or more.
If so... then the NSA could have found it and used it to deanonymize the whole Tor network...

This could explain how they found SilkRoad and Freedom Hosting servers.

Although I still believe that they used vulnerabilities on the servers themselves to gain root access and find out the real IP.

I think if we're going to do that, and maintain them all, we should seriously consider switching to a link encryption that doesn't use the SSL protocol at all.

That said, it shouldn't be *too* hard technically. Check out src/common/crypto.[ch] and src/common/tortls.[ch].

Anonymous, April 09, 2014

Now Tor expert bundle 0.2.4.21 has OpenSSL 1.0.1g.
Does this fix the bug for the client?
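
For what it's worth, here is one way to check which OpenSSL library a program is compiled against and which one it is actually running with (a minimal sketch using OpenSSL's version API; build with -lcrypto). Keep in mind that whether Tor uses your system's OpenSSL or a bundled copy depends on how your bundle was built:

    #include <stdio.h>
    #include <openssl/crypto.h>    /* SSLeay_version(), SSLEAY_VERSION */
    #include <openssl/opensslv.h>  /* OPENSSL_VERSION_TEXT */

    int main(void)
    {
        /* Version of the OpenSSL headers this program was compiled against. */
        printf("compiled against: %s\n", OPENSSL_VERSION_TEXT);

        /* Version of the libcrypto actually loaded at run time. */
        printf("running with:     %s\n", SSLeay_version(SSLEAY_VERSION));
        return 0;
    }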

Anonymous, April 09, 2014

Isn't it kind of hilarious that human beings can both design cryptosystems that take multiple lifespans of the universe to break and yet also have them be undone by simple memory management bugs? When will the software development community adopt formal verification as a standard practice for critical programs? Save us BitC.

Um, for the record, in general, verifying the correctness of a computer program with another program is impossible... theoretically, you can't even tell whether a program will finish executing; this is in fact what motivated the Turing machine in the first place.

I should qualify this to point out that this works a little differently for quantum computers...that is why Lockheed bought the D-Wave machine they keep at USC, actually. But, that caveat notwithstanding, I think the answer to your question regarding adoption of code verification is, more or less, "not any time soon."

Anonymous, April 09, 2014

I have updated to the newest TBB, and when I go to https://www.howsmyssl.com/ it tells me that my browser is still vulnerable. That's due to the use of TLS 1.0 by Firefox 24.4. Does this affect browsing through Tor in any shape or form?

FF is capable of using TLS 1.2, but it's not enabled in FF 24 or even 26 as far as I know. I can fix this by modifying security.tls.version.max to 3 and security.tls.version.min to 1 in about:config. Would this modification single me out among other Tor users in any shape or form?

Thanks.

"I can fix this by modifying security.tls.version.max to 3 and security.tls.version.min to 1 in about:config."

For the benefit of those who are not computer savvy, could you outline in greater detail how to make those modifications to FF 24 and FF 26?

And what do Tor developers have to say about FF 24.4 using the very old version of TLS 1.0 instead of 1.2? Let us hear what they (through arma, probably) have to say.

- type about:config in the URL bar, then click on "I'll be careful, I promise!"
- search for "security.tls.version.max", double-click on it, and change the number from 1 to 3
- (optional) search for "security.tls.version.min", double-click on it, and change it from 0 to 1 or 2
- search for "security.ssl3.rsa_fips_des_ede3_sha" and double-click on it; it will be set to "false"
- search for "security.enable_tls_session_tickets" and double-click on it; it will be set to "true".

(thanks to https://blog.dbrgn.ch/2014/1/8/improving_firefox_ssl_tls_security/ and https://blog.samwhited.com/2014/01/fixing-tls-in-firefox/)

You may have to restart Tor Browser to make sure the changes take effect.

You can now re-test at https://www.howsmyssl.com/; it should now say "probably okay".

Note: SamWhited says that these settings were disabled by default because "firefox is vulnerable to downgrade attacks"; I definitely don't know which is better.

Note 2: you may want to check your browser fingerprint on https://panopticlick.eff.org/ before and after the changes. For me it was OK.

Anonymous, April 09, 2014

Please help me understand. Could the hacker see the entire memory of the victim, or only the memory being used by the Tor client?

Are you really, absolutely positively sure of that?
Yes, technically it may be true that the memory used to send the (dummy) "heartbeat" data belongs to the Tor client process at the moment that data is being sent;
but if the Tor client had to request (allocate) the ~64 KB block of memory from the OS, that particular block of physical memory might happen to come from anywhere (depending on the OS and unpredictable conditions) and still hold data which formerly belonged to other processes, the system's or users'. Or does the OS zero out a block of memory it hands out in response to a process's requests? Surely not your vanilla OS, including MS Windows!

Assuming Tor requests a 64 KB block for the stated uninitialised "heartbeat" operation, does it then release it, or keep the same block for use in other heartbeat (or other) circumstances? If the block is allocated once and for all, it would mitigate the vulnerability, as an attacker could not dig into the client's memory again and again by repeating the attack. OTOH, if the block is released and a new block allocated each time, then it's the worst possible scenario: depending on precisely how the OS kernel satisfies memory requests, a large part of the system's physical memory contents might be grabbed by a persistent attacker.

Opinions, please?

--
Noino

I had a similar train of thought and I think this is the most important question on this page.

Many people are using Tor for online research and taking notes while browsing. Their text may be stored encrypted on the hard disk, but in memory it is clear text. In addition, an auto-save feature of a word processor or text editor may put multiple versions of the clear text in dynamically allocated memory locations.

Say such a person downloads a video while writing, and during this download, every few seconds, OpenSSL happily sends 64 KB chunks of memory out onto the Internet.

I would like to see a table with the threat potential of memory disclosures for the prevalent Windows OSes from XP to 8.1, for administrator and user sessions, as well as for Linux, if necessary with a differentiation between 32-bit and 64-bit systems.

Every serious OS, including Windows, zeros the memory it hands out to programs. This is to prevent security issues like reading the memory of sensitive programs, including those that are part of the OS itself.

The problem is that OpenSSL has its own memory management; it does not use the memory management from the OS. It has been a known bug for, I think, five years that disabling OpenSSL's internal memory management when compiling results in a non-functional version of OpenSSL, because of all the memory handling bugs that exist in the OpenSSL code. This is why the OpenBSD folks are forking the code base and attacking it with chainsaws, in order to get it down to a code base that they can audit and fix to their satisfaction.
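
To illustrate why the OS zeroing freshly mapped pages doesn't save you here, here is a minimal sketch (simplified and made up; not OpenSSL's actual allocator) of a freelist-style allocator that recycles buffers without wiping them: whatever the previous user of a buffer left behind is still there when it is handed out again.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Sketch of a one-slot freelist that recycles buffers without clearing them. */
    static unsigned char *freelist_slot = NULL;

    static unsigned char *buf_get(size_t n)
    {
        if (freelist_slot) {            /* reuse a previously "freed" buffer */
            unsigned char *p = freelist_slot;
            freelist_slot = NULL;
            return p;                   /* old contents still inside */
        }
        /* fresh allocation; memory newly obtained from the OS arrives zeroed,
         * but memory reused within the process may not */
        return malloc(n);
    }

    static void buf_put(unsigned char *p)
    {
        freelist_slot = p;              /* keep the buffer around instead of free()ing it */
    }

    int main(void)
    {
        unsigned char *a = buf_get(64);
        strcpy((char *)a, "secret session key material");
        buf_put(a);                     /* "freed", but never wiped */

        unsigned char *b = buf_get(64); /* recycled: the same bytes come back */
        printf("leftover: %s\n", b);    /* prints the old secret */
        free(b);
        return 0;
    }

So even on an OS that scrubs memory between processes, a library that manages its own buffer pool can keep stale secrets lying around inside the process, which is exactly what a heartbeat over-read then exposes.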

Anonymous, April 09, 2014

If being able to dump memory from a relay exposes users, wouldn't the admin running the relay be able to dump its memory (via, say, gdb) and expose the clients that are going through it?