Tor design proposals: how we make changes to our protocol

(This post is likely to be interesting for folks who want to know how the Tor software is made.)

At the core of the Tor software lies the network protocol that Tor relays and clients use to talk to each other. We try to make sure we have a good set of specification documents for the protocol at all times, so that other people can write compatible software that interoperates with Tor.

Once upon a time, we used to change these specification documents whenever we changed the software's behavior. That didn't go well. We would have changes in the spec that we had forgotten to make in the software, and tons of inline comments where we argued about whether a particular paragraph was a good idea.

So back in 2007, we introduced a lightweight change-proposal process: to make a change to the Tor protocol, you write up a little design document, and send it to the tor-dev mailing list. Once it meets editorial quality, it can go into the proposals repository. Once it's implemented, it can be merged into the spec.

There are a lot of proposals to look at, though. The current set of open proposals has almost 100,000 words in it! (That's almost half again as long as the Tor specs themselves.)

To help people navigate through this pile of design proposals, I started to write a regular "proposal status" email to explain what all of the open proposals are about. Last year, however, I fell out of the habit. Tonight, I've tried to fall back in: here's the latest proposal status writeup.

Below the cut, my summaries of the still-open proposals that have been added in the past couple of years -- and thanks to all the busy designers who have been working to think of ways to improve Tor!

228 Cross-certifying identity keys with onion keys

This proposal signs each server's identity key with its onion
keys, to prove onion key ownership in the router descriptor.
It's not clear that this actually improves security, but it
fixes an annoying gap in our key authentication. I have it coded
up in my #12498 branch, targeting 0.2.7. (2/2015)

229 Further SOCKS5 extensions

Here's a nice idea for how we can support a new SOCKS5 protocol
extension to relay information between clients and Tor, and
between Tor and pluggable transports, more effectively. It
also adds some additional SOCKS5 error codes. There are some
open questions to answer. "Trunnel" has an implementation of
the protocol extension formats in its examples directory. (2/2015)
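
Proposal 229 builds on the standard SOCKS5 wire format, so a minimal sketch of that format helps show where new error codes would slot in. This is a rough illustration against RFC 1928, not the proposal's actual code; the function names and the extension remark are mine:

```python
import struct

# Standard SOCKS5 reply codes from RFC 1928; an extension in the spirit
# of proposal 229 would define additional codes beyond 0x08.
SOCKS5_REPLIES = {
    0x00: "succeeded",
    0x01: "general SOCKS server failure",
    0x02: "connection not allowed by ruleset",
    0x03: "network unreachable",
    0x04: "host unreachable",
    0x05: "connection refused",
    0x06: "TTL expired",
    0x07: "command not supported",
    0x08: "address type not supported",
}

def client_greeting(methods=(0x00,)):
    """Build the SOCKS5 client greeting: VER, NMETHODS, METHODS."""
    return struct.pack("BB", 0x05, len(methods)) + bytes(methods)

def parse_reply_code(code):
    """Map a reply code to its RFC 1928 meaning."""
    return SOCKS5_REPLIES.get(code, "unknown/extension-defined code")
```

Note that the default greeting is only three bytes long (`\x05\x01\x00`): protocol version 5, one method offered, "no authentication".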

230 How to change RSA1024 relay identity keys [DRAFT]
231 Migrating authority RSA1024 identity keys [DRAFT]

Who remembers the OpenSSL "Heartbleed" vulnerability?
These proposals I wrote try to explain safer mechanisms for a bunch
of servers to migrate their RSA1024 identity keys at once. I'm not
sure we'll be able to build these, though: implementing proposal
220 seems cleverer to me. (2/2015)

232 Pluggable Transport through SOCKS proxy [OPEN]

Arturo Filastò wrote this proposal for chaining pluggable
transports which themselves need to go through proxies. Seems
potentially useful! (2/2015)

233 Making Tor2Web mode faster [OPEN]

This one by Virgil, Fabio, and Giovanni describes a couple of ways
that Tor2Web builds of Tor can save some circuit hops that they use
today. Potentially useful for Tor2Web; any implementation needs to
be sure that it never changes the behavior of non-Tor2Web clients.

234 Adding remittance field to directory specification [OPEN]

Virgil, Leif, and Rob added this proposal for relays to specify
payment addresses for schemes that want to compensate relay
operators for their use of bandwidth. (2/2015)

235 Stop assigning (and eventually supporting) the Named flag [DRAFT]

This proposal is about removing the Named flag. (Thanks to
Sebastian Hahn for writing it!) The rationale is that the naming
system for relays never worked particularly well, and it had
strange and hard-to-explain security properties. We've implemented
the key part of this already: directory authorities don't assign
the Named flag any longer. Next up will be removing client support
for parsing and understanding it. (2/2015)

236 The move to a single guard node [OPEN]

This proposal suggests that to limit client fingerprinting, and to
limit opportunities for attacks, clients should use a single guard
node, rotated infrequently. This transition is in progress; we use
a single guard node for circuit traffic now, but in order to make
guards more long-lived, we need to adjust how they are chosen.
George has a patch for that as #9321, targeting inclusion into
0.2.6. (Thanks to George Kadianakis and Nicholas Hopper for
writing this one.) (2/2015)

237 All relays are directory servers [OPEN]

Matthew Finkel wrote this proposal to describe a transition to a
world where Tor relays can be directory servers without having an
open DirPort -- and eventually, where every relay can be a
DirServer. He has an implementation, possibly for 0.2.6 or 0.2.7,
in ticket #12538. (2/2015)

238 Better hidden service stats from Tor relays [DRAFT]

Here's an important one that needs many eyes. George Kadianakis,
David Goulet, Karsten Loesing, and Aaron Johnson wrote this to
describe a mechanism for the Tor network to securely produce
aggregate hidden service usage statistics without exposing
sensitive intermediate results. This has an implementation in
0.2.6 and should probably be marked closed. (2/2015)
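
As a toy illustration of the general approach (not the proposal's actual mechanism, which also involves aggregation across relays): a relay can round a sensitive counter up into a coarse bin and add symmetric noise before publishing, so the published value reveals little about any small set of users. The bin size and noise scale below are made-up parameters, not Tor's:

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Sample Laplace(0, scale) noise via the inverse-CDF method."""
    u = rng.random() - 0.5
    sign = 1 if u >= 0 else -1
    # max() guards against log(0) in the (astronomically rare) edge case.
    return -scale * sign * math.log(max(1 - 2 * abs(u), 1e-12))

def obfuscate_count(true_count, bin_size=8, scale=11.0, rng=random):
    """Round a sensitive counter up to a bin boundary, then add noise,
    so the published statistic hides small per-user variations.
    bin_size and scale here are illustrative, not Tor's parameters."""
    binned = bin_size * ((true_count + bin_size - 1) // bin_size)
    return binned + laplace_noise(scale, rng)
```

Averaged over many reporting periods the noise cancels out, so the aggregate stays useful while any single report stays fuzzy.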

239 Consensus Hash Chaining [DRAFT]

Here's the start of a good idea that Andrea Shepard wrote up (with
some help from Nick). The idea is to make it hard even for a set
of corrupt authorities (or authority-key-thieves) to make a
personalized false consensus for a target user, by authenticating
the whole sequence as a hash chain. Others on tor-dev suggested
improvements and had good questions (thanks, Leif and Sebastian G!)
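
The hash-chain idea can be sketched in a few lines: each consensus commits to the digest of its predecessor, so substituting a forged document anywhere breaks verification from that point onward. A toy illustration (the helper names and genesis value are mine, not the proposal's):

```python
import hashlib

def chain_digest(document, prev_digest):
    """Digest committing to this document and the whole prior history."""
    h = hashlib.sha256()
    h.update(prev_digest)
    h.update(document)
    return h.digest()

def build_chain(documents, genesis=b"\x00" * 32):
    """Compute the running chain digest after each document."""
    digests, prev = [], genesis
    for doc in documents:
        prev = chain_digest(doc, prev)
        digests.append(prev)
    return digests

def verify_chain(documents, digests, genesis=b"\x00" * 32):
    """Recompute the chain and check every link against the claimed digests."""
    prev = genesis
    for doc, claimed in zip(documents, digests):
        prev = chain_digest(doc, prev)
        if prev != claimed:
            return False
    return True
```

A client that remembers even one trustworthy chain digest can detect a personalized false consensus inserted anywhere before it.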

240 Early signing key revocation for directory authorities [DRAFT]

This one is another Andrea+Nick collaboration about certificate
revocation for our most sensitive keys. If an authority key needs
to be replaced, it would be great to take the old one out of
circulation as fast as possible. Peter Palfrader on tor-dev had
some ideas for making this better. (2/2015)

241 Resisting guard-turnover attacks [DRAFT]

I wrote this one with good ideas from Aaron Johnson and Rob
Jansen. It describes a way to respond to an important class of
attacks where an adversary kills off a targeted user's guards until
the user has chosen a guard the attacker controls. (See the
"Sniper Attack" paper.) The defense is tricky, and if not done
right, might lead clients to kick themselves off the network out of
paranoia. So, this proposal could use more analysis. (2/2015)


February 08, 2015


Regarding Proposal 236: consider a passive adversary with the ability to listen to network connections, who just collects some statistics. Then it would be simpler for them to observe the anomalous percentage of obfuscated/encrypted network traffic linked to a single guard node address!

Would it? I don't think that's true. I think it would bring the number of funny-looking flows from three down to one. (Three funny-looking flows is an even weirder fingerprint, right?)

I believe this will not get through, but anyway...
Can YOU explain this marking of Tor TCP connections?
Flags [S], seq ..., win ..., options [...], length 0
Flags [S.], seq ..., ack ..., win ..., options [...], length 0
Flags [.], ack 1, win ..., length 0
=== the usual handshake, but note:
Flags [P.], seq 1:4, ack 1, win ..., length 3
Why does Tor explicitly mark its connections!?

Well, it could just be 'modern-style coding': fast and dirty.
There is no need for Tor to mark connections: they are all identifiable by publicly available source/destination IPs, so you could cut off access for everyone but Tor exits.
I checked your example and found that the local Tor proxy buffers all data before replying to the client about the result of connection establishment. Then, if the connection with the server was established, it feeds the server that buffered data and informs the client of the connection's success.
At that point the client _starts_ communicating with the server, but the server already has data!
For now you can exploit this 'option': just concatenate your request to the Tor proxy with a command to the server, and let Tor do the work for you.
No doubt the Tor designers will close that hole by dropping all buffered pre-connection data not intended to be seen by the server, but for now... enjoy...
"The client can do anything; the server ought to survive, because _all_ packets are specially crafted."
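
The "concatenate your request with a command to the server" trick described above can be sketched as building one buffer that carries a SOCKS5 CONNECT request followed by the first application bytes. This is a rough illustration of the idea against RFC 1928, not Tor's actual code, and it omits the method-negotiation greeting that would precede the request in a real session:

```python
import struct

def socks5_connect_request(host, port):
    """SOCKS5 CONNECT to a domain name: VER CMD RSV ATYP LEN ADDR PORT."""
    addr = host.encode("ascii")
    return (struct.pack("BBBB", 0x05, 0x01, 0x00, 0x03)
            + struct.pack("B", len(addr)) + addr
            + struct.pack(">H", port))

def connect_with_payload(host, port, payload):
    """One write combining the CONNECT request with the first application
    bytes, so the proxy can forward the payload as soon as the upstream
    connection succeeds (sketch of the trick, not Tor's behavior)."""
    return socks5_connect_request(host, port) + payload
```

Whether a given proxy buffers and forwards those early bytes, or drops them, is exactly the behavior the comment is probing at.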

I checked the proxy I had used in testing and can confirm it is 'Tor', so it has this 'feature'. I have looked through these proposals and can say the same: 'modern-style coding', fast and dirty.
To start with, I don't get how Tor became just an HTTP proxy! What about HTTPS and other SSL-based connections? So they don't seem worth reading, and Tor has just become a deviation from the SOCKS protocol. Anyway, I suppose it would have been more logical and unintrusive to add a new command code, say 201, to the connection request and modify just the TBB client, instead of forcing all non-HTTP applications to adapt to changed 'private SOCKS' protocol specs. (The Microsoft way, as in 'do what I say'?)
My guess is the first message was probably a reaction to some kind of compiler optimization, like expanding the socks4.request structure to a 32-bit boundary, so a server got _exactly_ 3 extra bytes. Now somebody has upgraded the Tor proxy and these bytes turned out to be exposed.
"client can do anything - server ought to survive because _all_ packets are specially crafted"

If you want to set up 3 Tor connections, you can start 3 Tor proxies and load 2 of them with random traffic.
Anyway, I suppose you should use _only_ encrypted connections, so all traffic will be encrypted!
The "passive adversary" is just your ISP. Well, it is active, by the way: they can connect from your active IP address; they can set up a service with your active IP address.
And why should they "observe"? It's known for sure that you are connecting to Tor.

"client can do anything - server ought to survive because _all_ packets are specially crafted"


February 08, 2015


The new proposal looks interesting, but what's the reason Tor currently uses RSA1024 instead of a higher key size or ECC?

many thanks for your long-term, continued willingness to engage with good-faith questions/comments substantively and knowledgeably!

it needs to be noted that this sort of stuff can take up lots of time. i want you to know that people see, value, and very much respect your continued willingness to engage with folks in this way!

Everyone wonders about this, but no one gets a decent answer. There is no rational reason whatsoever to still use 1024-bit RSA keys, unless you intend for the product's cryptography to be easily broken.

The answer is "yes, this is an issue, here is a proposed design, please help evaluate the design, and also please help us implement and deploy it."

Tor is made for and by a community of people. You can either help, or sit and wait, or I guess option three is to complain and then do one of those things. :)

(I should also clarify that we've moved beyond 1024-bit RSA on the link encryption and the circuit encryption, so it's only the relay identity keys that remain. It's still an issue for sure, but it's not straightforward to undermine Tor by factoring a relay's identity key.)


February 10, 2015


Regarding proposal 236, TAILS users rebooting daily would evade any fix to pin down guard nodes for longer periods. Would it be a simple solution to have TAILS ship with a suitably high bandwidth entry node fixed by setting it in the torrc? Then, if the entry node is changed for each release of TAILS, the guard will at least be changed, and infrequently. I understand this means all TAILS users will end up clobbering the same unlucky entry node, but I suppose that as they make up perhaps a percent of all Tor users, it's not such an imbalance, and they would pick on each node for only four to six weeks at a time.

Or is giving away that all TAILS users will be using a particular entry node for the next month or so not so smart after all?

I wonder if later on, or instead, TAILS should implement pluggable transports as standard.

-- Straggler

P.s., I'm sorry to bother you here about TAILS, but as I'm not able to do e-mail or chat, I've found the withdrawal of the old TAILS forum at to be a real problem raising matters. I thought there was supposed to be a replacement here at, but evidently not.

For what it's worth, you could try posting to and tag the post with tails. I do hope Tails developers periodically check into posts tagged with tails on this site. (I've seen whonix developers respond there periodically on posts tagged with whonix)

You definitely don't want every Tails user using the same bridge.

People have batted around different ideas. One of the more intriguing ones is to make your bridge choice a function of your local hardware or something, so you naturally gravitate to the same bridge but it doesn't keep Tor state in the traditional way. There are of course some complications to the idea.

That's the solution that came to my mind. An over-simplistic example:
node_t chosen_guard;
uint32_t d, delta = UINT32_MAX;  /* start at the largest possible distance */
for (guard in available_guards) {  /* pseudocode iteration */
    /* XOR distance between the hashed MAC address and the hashed
       node ID; keep the guard that minimizes it */
    d = xor(sha256(hw_mac_addr), sha256(get_node_id(guard)));
    if (d < delta) {
        chosen_guard = guard;
        delta = d;
    }
}
I would be interested in reading more about the benefits and risks of selecting guard nodes this way.

Wouldn't it make more sense to use the public IP address instead of the local MAC address? This way wouldn't leak information about a particular machine connecting from various networks. Also all clients connecting from the same network would share the same guard, which seems like a good thing. The point of guard nodes is to lessen the chance of an adversary controlling your entry node (or ISP) and exit node (or endpoint).
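
One way to make the choice deterministic in a client attribute, whether MAC address or public IP, is rendezvous-style hashing: hash the attribute together with each candidate guard's identity and pick the smallest digest. A minimal sketch, assuming illustrative names rather than Tor's actual API:

```python
import hashlib

def pick_guard(client_key, guard_ids):
    """Deterministically map a client attribute (e.g. its public IP, as
    bytes) to one guard: hash key and guard identity together and take
    the guard with the smallest digest (rendezvous hashing). The same
    key always yields the same guard, with no state file to persist."""
    def score(guard_id):
        return hashlib.sha256(client_key + b"|" + guard_id).digest()
    return min(guard_ids, key=score)
```

The choice is stable under reordering of the candidate list, and changes for only a fraction of clients when a guard joins or leaves, which is the property that makes this family of schemes attractive here.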

What, because you've heard of an NSA building in Utah, and you have heard of nothing else? :)

There are many other things in Utah. For example, there's a nice guy named Jesse who is a grad student at a university, who does computer science research and who runs a volunteer bridge for meek.

Yep, and they conform to your decision. Use your personal highly sophisticated equipment: the human brain. Try to imagine that they can generate traffic sourced from a Tor exit to discredit Tor users. There are no delays, no circuit-establishment processes, just connections...
trying ... failure; trying ... failure; trying ... reboot.


February 12, 2015


"241 Resisting guard-turnover attacks [DRAFT]
[...]The defense is tricky, and if not done
right, might lead clients to kick themselves off the network out of
paranoia. So, this proposal could use more analysis. (2/2015)"

Maybe auto-rotation gets into too much voodoo?
Maybe make it easier to choose manually? Vidalia wasn't wrong!


February 12, 2015


"Vidalia laid to rest
[...]Tails team has been working on a simple interface to replace one of the most-missed features of the defunct program, the circuit visualization window."…

BUT please make it full-featured, Vidalia-like!
Minimal requirements:
torrc editable! MUST! Otherwise, big problems.
All circuits and nodes visible, graphically, plus 'Close Circuits (Del)'.


February 13, 2015


A bit random but what's up with Pwn2Own and TBB? No go for 2015? But mos def for 2016?

Thanks for the info.

The Tor Project doesn't have that prize cash to hand out to winners like they do in those fancy contests.
But surely whatever major hole they find in Firefox can be applied (i.e., fixed) in Tor Browser also, because TBB is based on Firefox.


February 23, 2015


Tails - Vidalia laid to rest

Tails without an editable torrc, without visible circuits/cc, without 'Close Circuits (Del)', and without 2 different "New Identity" options would be a GADGET only )-:
Using bridges is NO replacement.