What's new in Tor

by dgoulet | December 19, 2016

Today, we've released the first stable version of the 0.2.9.x series, bringing exciting new features to Tor. The series has seen 1406 commits from 32 different contributors. Please see the ChangeLog for more details about what has been done.

This post will outline three features (among many other things) that we are quite proud of and want to describe in more detail.

Single Onion Service

Over the past several years, we've collaborated with many large-scale service providers such as Facebook and Riseup, organizations that deployed onion services to improve their performance.

Onion services are great because they offer anonymity on both the service side and the client side. However, there are cases where the onion service does not require anonymity. The main example is when the service provider does not need to hide the location of its servers.

As a reminder to the reader, an onion service connection between a client and a service goes through 6 hops, while a regular Tor connection uses 3. Because of this, onion services are much slower than regular Tor connections.

Today, we are introducing Single Onion Services! With this new feature, a service can now specify in its configuration file that it does not need anonymity, thus cutting the 3 hops between the service and its Rendezvous Point and speeding up the connection.

For security reasons, if this option is enabled, only single onion services can be configured; they can't coexist with regular onion services. Because this removes the anonymity aspect of the service, we took extra precautions to make it very difficult to enable a single onion service by mistake. In your torrc file, here is how you do it:

HiddenServiceNonAnonymousMode 1
HiddenServiceSingleHopMode 1

Please read about these options in the manual page before you enable them.

Shared Randomness

We've talked about this before but now it is a reality. At midnight UTC every day, directory authorities collectively generate a global random value that cannot be predicted in advance. This daily fresh random value is the foundation of our next generation onion service work coming soon to a Tor near you.
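The idea underneath is a commit-and-reveal scheme. The following is a rough illustrative sketch only, not the actual protocol: the real specification uses SHA3-256, a fixed 24-hour timeline, and signed values exchanged in authority votes, and the helper names here are made up for the example.

```python
import hashlib
import os

def commit(reveal):
    # A commitment is simply a hash of the secret reveal value.
    return hashlib.sha256(reveal).hexdigest()

# Commit phase: each authority picks a secret and publishes only its hash.
reveals = [os.urandom(32) for _ in range(3)]
commitments = [commit(r) for r in reveals]

# Reveal phase: authorities publish their secrets; everyone verifies them
# against the earlier commitments, so nobody can change their contribution
# after seeing the others'.
assert all(commit(r) == c for r, c in zip(reveals, commitments))

# The shared random value is a hash over all the verified reveals, so it
# stays unpredictable as long as at least one contribution is honest.
srv = hashlib.sha256(b"".join(sorted(reveals))).digest()
```

The point of the two phases is that an authority must lock in its contribution before it can see anyone else's, which is what makes the final value impossible to bias unilaterally.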

In the consensus file, the values look like this; if all goes well, the consensus will have a new pair at 00:00 UTC:

shared-rand-current-value Hq+hGlzwAVetJ2zkO70riH/SEMNri+c7Ps8xERZ3a0o=
shared-rand-previous-value CY5TncVAltDpkBKZUBYT1canvqmVoNuweiKVZIilHfs=

Thanks to atagar, version 1.5.0 of the Stem library supports parsing the shared random values from the consensus. See here for more information!
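For a quick look at the format itself, here is a hypothetical sketch that pulls the two lines out of consensus text and decodes them; the `parse_shared_rand` helper and `SAMPLE` string are illustrative, not part of Stem. Each value decodes to 32 bytes of randomness:

```python
import base64

SAMPLE = """\
shared-rand-current-value Hq+hGlzwAVetJ2zkO70riH/SEMNri+c7Ps8xERZ3a0o=
shared-rand-previous-value CY5TncVAltDpkBKZUBYT1canvqmVoNuweiKVZIilHfs=
"""

def parse_shared_rand(consensus_text):
    """Return {keyword: raw bytes} for the shared-rand-* lines."""
    values = {}
    for line in consensus_text.splitlines():
        parts = line.split()
        if parts and parts[0].startswith("shared-rand-"):
            # The base64 value is the last field on the line (real
            # consensus lines may carry extra fields before it).
            values[parts[0]] = base64.b64decode(parts[-1])
    return values

srv = parse_shared_rand(SAMPLE)
```

Decoding to exactly 32 bytes is a cheap sanity check that the value is well-formed before handing it to anything downstream.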

We have deliberately not exposed those values on the control port yet; we will wait for a full stable release cycle to make sure they are stable enough for third-party applications to use (https://trac.torproject.org/19925).

Mandatory ntor handshake

This is another important security feature introduced in the new release. Authorities, relays and clients now require ntor keys in all descriptors, for all hops, for all circuits, and for all other roles.

In other words, except for onion services (and this will be addressed with the next generation), only ntor is used--now finally dropping the TAP handshake.

This results in better security for the overall network and users.

Enjoy this new release!


Please note that the comment area below has been archived.

No (unless they do something esoteric like trying to track round-trip timing to guess how many hops are involved).

In the glorious future, when we're using next-generation onion services, the new onion descriptors will have a field where the onion service can say "I'm being a single onion service", and then clients will be able to predict it.

December 20, 2016

In reply to arma


You write, "where an onion service CAN say..." (emphasis added)

This field should be forced to accuracy and be uneditable by the service. I remain disenchanted with the idea that a connection can swap between three hops and six hops with the client ignorant of when such swapping occurs and how it may impact their anonymity.

You, the Tor client visiting the onion service, still make your own three-hop circuit, which is where your anonymity comes from.

So in the case where you want protection from the onion service, you can't rely on the 3 hops it chooses anyway, because it could be choosing them to hurt you.

You're right that for the case where you want protection from some external attacker, it's not totally clear that just your three hops provides less or the same or more privacy than the full six hops. But I think for most attacks, if your three hops are good then you're good, and if your three hops are not good then you're not good. I would welcome somebody coming up with an attack where the distinction matters in practice. Thanks!

"Malicious activity has been detected from your computer or another computer on your network.

"Your computer may be compromised with a virus and part of a botnet, sending spam or attacking websites. We recommend for you to update your anti-virus software and perform a full scan.

"Complete the challenge below to be granted temporary access to this website."

I think the "attack" would go like this. (I put attack in quotes because the onion service operator should be very very aware of the fact this is possible and doesn't mind.)

- User runs a relay
- User knows that foobar.onion is a single onion service
- User builds a circuit to foobar.onion, using his relay as the rendezvous point

December 22, 2016

In reply to pastly


> - User builds a circuit to foobar.onion, using his relay as the rendezvous point

I thought the rendezvous point was chosen by the Tor node hosting the hidden service, not the one connecting to it? Or am I confusing that with the introduction point? Of course, the hidden service could by chance choose a "malicious" user's relay as the rendezvous point, but I don't understand how the user could make that happen on demand.

The service chooses the intro points, and the client chooses the rendezvous point.

For the final circuit (the one the actual data goes over, aka the rendezvous circuit), it's:

User <-> User's Guard <-> User-picked Middle <-> User-picked rendezvous point <-> Service-picked middle <-> Service-picked middle <-> Service's Guard <-> Service.

December 20, 2016


So, in other words, this option is for users who use Tor for normal browsing and not for complicated stuff. This is what I understand; pardon me if I'm wrong. And if I'm right, I need to ask when this function is going to be integrated into the Onion Browser made by Mike Tigas, because from what I read in the other post, it looks like Mike Tigas, who works at ProPublica, is collaborating with you on providing the Tor browser for iOS.
A question I have is why you don't make an official Tor browser, or at least put some work into the existing Onion Browser we already have (trying to perfect it)? There are many options in the App Store, but because these apps are not made by the Tor Project, it is very hard to trust them. This is why I, and probably other users, want something original.

December 20, 2016


Tor Browser's Tor circuit box can't distinguish between a hidden service and a single-hop onion service.

December 21, 2016


I stopped by the OFTC #tor IRC chatroom on behalf of the Retroshare project, which has a great many users now running Retroshare hidden nodes via Tor, to ask about the new Tor shared randomness project and see whether it is something the Retroshare developers would be interested in using as a more random base for advanced entropy pools in encryption.

The #tor chat ops didn't know anything about the project, began passing it off as a 'game', and then challenged me to post the link, as they apparently didn't believe it existed. I just remembered scanning it while reading some early Tor blog entries months earlier, and it took me a day to locate it, return to the OFTC #tor chatroom, and post the link, explaining the project in the channel.

There was no response. Later, before leaving, a user asked for help, which I provided when no one else stepped up to offer answers or a solution to their problem.

I returned the next day and found I had been gagged by the ops: prevented from posting or making any comments whatsoever, and I have remained so for months, up to the present.

All this for asking about information regarding the Tor shared randomness project in the #tor chatroom.

I own 5 chatrooms on 5 different IRC servers, including the #retroshare OFTC chatroom. If anyone wishes to address this issue and behavior with me, you can find me there, or in any of a dozen Retroshare chat lobbies, channels, and forums where I have helped and advised hundreds of users who wanted to set up their applications, including Retroshare, routed through Tor.

I also wrote the Tor guide for Retroshare users, found on multiple websites, which has helped countless new users globally get introduced to Tor and use the Retroshare global communications network via Tor.

December 27, 2016


Why not use cryptocurrencies as randomness sources?

A) A design like that potentially brings in many new dependencies, in terms of engineering complexity.

B) Systems like Bitcoin are not immune to attacks that influence the next block, and new attacks continue to show up, so the security analysis becomes quite complex. Also, since our security goals would not be the same as those of the foocoin design, its developers might end up changing something that hurts us but doesn't hurt them.

C) The security of a particular cryptocurrency is a function of its popularity. So even if bitcoin is popular this year, who knows about three or four years from now? If suddenly bitcoin is boring and foocoin is exciting, and people leave bitcoin, we're left with a cryptocurrency that's surprisingly more vulnerable than we expected.