A couple of months ago, we created a proposal for a Tor Q&A page on Stack Exchange. The proposal moved into the commitment phase shortly after, but we need more help to move the page into a live beta. If you would like to see a Q&A site for Tor, please visit our proposal page and click the "Commit!" button: http://area51.stackexchange.com/proposals/56447/tor
Welcome to the eleventh issue of Tor Weekly News, the weekly newsletter that covers what is happening in the taut Tor community.
tor 0.2.4.17-rc is out
There are now confirmations that the sudden influx of Tor clients which started mid-August is indeed coming from a botnet. “I guess all that work we’ve been doing on scalability was a good idea,” wrote Roger Dingledine in a blog post about how to handle millions of new Tor clients.
On September 5th, Roger Dingledine announced the release of the third release candidate for the tor 0.2.4 series. This is an emergency release “to help us tolerate the massive influx of users: 0.2.4 clients using the new (faster and safer) ‘NTor’ circuit-level handshakes now effectively jump the queue compared to the 0.2.3 clients using ‘TAP’ handshakes”.
It also contains several minor bugfixes and some new status messages for better monitoring of the current situation.
Roger asked relay operators to upgrade to 0.2.4.17-rc: “the more relays that upgrade to 0.2.4.17-rc, the more stable and fast Tor will be for 0.2.4 users, despite the huge circuit overload that the network is seeing.”
For relays running Debian or Ubuntu, upgrading to the development branch can be done using the Tor project’s package repository. New versions of the beta branch of the Tor Browser Bundle have also been available since September 6th. The next Tails release, scheduled for September 19th, will also contain tor 0.2.4.17-rc.
Hopefully, this will be the last release candidate. The only thing still missing before the 0.2.4.x series can be declared stable is enough time to finish the release notes.
The future of Tor cryptography
After the last round of revelations from Edward Snowden, described as “explosive” by Bruce Schneier, several threads started on the tor-talk mailing list to discuss Tor cryptography.
A lot of what has been written is speculative at this point. But some have raised concerns about 1024 bit Diffie–Hellman key exchange. This has already been addressed with the introduction of the “ntor” handshake in 0.2.4 and Nick Mathewson encourages everybody to upgrade.
Another thread prompted Nick to summarize his views on the future of Tor cryptography. Regarding public keys, “with Tor 0.2.4, forward secrecy uses 256-bit ECC, which is certainly better, but RSA-1024 is still used in some places for signatures. I want to fix all that in 0.2.5 — see proposal 220, and George Kadianakis’ draft hidden service improvements (descriptors, identity keys), and so forth.” Regarding symmetric keys, Nick wrote: “We’re using AES128. I’m hoping to move to XSalsa20 or something like it.” In response to a query, Nick clarified that he doesn’t think AES is broken: only hard to implement right, and only provided in TLS in concert with modes that are somewhat (GCM) or fairly (CBC) problematic.
The effort to design better cryptography for the Tor protocols is not new. More than a year ago, Nick Mathewson presented proposal 202 outlining two possible new relay encryption protocols for Tor cells. Nick mentioned that he’s waiting for a promising paper to get finished here before implementation.
A third question was raised regarding trust in algorithms certified by the US NIST. Setting speculation aside, Nick emphasized that several NIST algorithms are “hard to implement correctly”.
Nick also plans to change more algorithms: “Over the 0.2.5 series, I want to move even more things (including hidden services) to curve25519 and its allies for public key crypto. I also want to add more hard-to-implement-wrong protocols to our mix: Salsa20 is looking like a much better choice to me than AES nowadays, for instance.”
Nick concluded one of his emails with the words: “these are interesting times for crypto”, which sounds like a good way to put it.
Toward a better performance measurement tool
“I just finished […] sketching out the requirements and a software design for a new Torperf implementation,” announced Karsten Loesing on the tor-dev mailing list.
The report begins with: “Four years ago, we presented a simple tool to measure performance of the Tor network. This tool, called Torperf, requests static files of three different sizes over the Tor network and logs timestamps of various request substeps. These data turned out to be quite useful to observe user-perceived network performance over time. However, static file downloads are not the typical use case of a user browsing the web using Tor, so absolute numbers are not very meaningful. Also, Torperf consists of a bunch of shell scripts which makes it neither very user-friendly to set up and run, nor extensible to cover new use cases.”
The specification lays out the various requirements for the new tool, and details several experiments like visiting high profile websites with an automated graphical web browser, downloading static files, crafting a canonical web page, measuring hidden service performance, and checking on upload capacity.
Karsten added “neither the requirements nor the software design are set in stone, and the implementation, well, does not exist yet. Plenty of options for giving feedback and helping out, and most parts don’t even require specific experience with hacking on Tor. Just in case somebody’s looking for an introductory Tor project to hack on.”
Sathya already wrote that this was enough material to get the implementation started, and the project has plenty of work for anyone interested. Feel free to join him!
More monthly status reports for August 2013
The wave of regular monthly reports from Tor project members continued this week with Sukhbir Singh, Matt Pagan, Ximin Luo, mrphs, Pearl Crescent, Andrew Lewman, Mike Perry, Kelley Misata, Nick Mathewson, Jason Tsai, Tails, Aaron, and Damian Johnson.
Not all new Tor users are computer programs! According to their latest report, Tails is now booted twice as often as it was six months ago (from 100,865 to 190,521 connections to the security feed).
With the Google Summer of Code ending in two weeks, the students have sent their penultimate reports: Kostas Jakeliunas for the Searchable metrics archive, Johannes Fürmann for EvilGenius, Hareesan for the Steganography Browser Extension, and Cristian-Matei Toader for Tor capabilities.
Damian Johnson announced that he had completed the rewrite of DocTor in Python, “a service that pulls hourly consensus information and checks it for a host of issues (directory authority outages, expiring certificates, etc). In the case of a problem it notifies tor-consensus-health@, and we in turn give the authority operator a heads up.”
In his previous call for help to collect more statistics, addressed to bridge operators, George Kadianakis forgot to mention that an extra line with “ExtORPort 6669” needed to be added to the tor configuration file. Make sure you do have it if you are running a bridge on the tor master branch.
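For reference, a bridge torrc carrying that extra line might look like the following (the ORPort value here is purely illustrative; only the ExtORPort line is the one George mentioned):

```
# torrc for a bridge running the tor master branch
BridgeRelay 1
ORPort 443
ExtORPort 6669
```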
For the upgrade of tor to the 0.2.4.x series in Tails, a tester spotted a regression while “playing with an ISO built from experimental, thanks to our Jenkins autobuilder”. This marks a significant milestone in the work on automated builds done by several members of the Tails team in the course of the last year!
Tails’ next “low-hanging fruit” session will be on September 21st at 08:00 UTC. Mark the date if you want to get involved!
Marek Majkowski reported on how one can use his fluxcapacitor tool to get a test Tor network started with Chutney ready in only 6.5 seconds, a vast improvement over the 5 minutes he initially had to wait!
Eugen Leitl drew attention to a new research paper by Alex Biryukov, Ivan Pustogarov, and Ralf-Philipp Weinmann from the University of Luxembourg which aims to analyze the content and popularity of hidden services.
Tor Help Desk roundup
The Tor help desk had a number of emails this week asking about the recent stories in the New York Times, the Guardian, and ProPublica regarding NSA’s cryptographic capabilities. Some users asked whether there was a backdoor in Tor. Others asked if Tor’s crypto was broken.
There is absolutely no backdoor in Tor. Tor project members have been vocal in the past about how tremendously irresponsible it would be to backdoor our users. As it is a frequently-asked question, users have been encouraged to read how the project would respond to institutional pressure.
The Tor project does not have any more facts about the NSA’s cryptanalysis capabilities than what has been published in newspapers. While there is no actual evidence that Tor’s encryption is broken, the idea is to remain on the safe side by using more trusted algorithms for the Tor protocols. See above for a more detailed write-up.
Help the Tor community!
Tor is about protecting everyone’s freedom and privacy. There are many ways to help but getting involved in such a busy community can be daunting. Here’s a selection of tasks on which one could get started:
Get tor to log the source of control port connections. It would help developers of controller applications and libraries (like Stem) to know which program is responsible for a given access to the control facilities of the tor daemon. Knowledge required: C programming, basic understanding of network sockets.
Diagnose what is currently wrong with Tor Cloud images. Tor Cloud is an easy way to deploy bridges and it looks like the automatic upgrade procedure caused problems. Let’s make these virtual machines useful again for censored users. Knowledge required: basic understanding of Ubuntu system administration.
This issue of Tor Weekly News has been assembled by Lunar, dope457, mttp, malaparte, harmony, Karsten Loesing, and Nick Mathewson.
Want to continue reading TWN? Please help us create this newsletter. We still need more volunteers to watch the Tor community and report important news. Please see the project page, write down your name and subscribe to the team mailing list if you want to get involved!
Many Tor users and various press organizations are asking about one slide in a Brazilian TV broadcast. Jonathan Mayer, a graduate student in law and computer science at Stanford University, then speculated on what this "QUICK ANT" could be. Since then, we've heard all sorts of theories.
We've seen the same slides as you and Jonathan Mayer have seen. It's not clear what the NSA or GCHQ can or cannot do. It's not clear if they are "cracking" the various crypto used in Tor, merely tracking Tor exit relays or Tor relays as a whole, or running their own private Tor network.
What we do know is that if someone can watch the entire Internet all at once, they can watch traffic enter Tor and exit Tor. This likely de-anonymizes the Tor user. We describe the problem as part of our FAQ.
We think the most likely explanation here is that they have some "Tor flow detector" scripts that let them pick Tor flows out of a set of flows they're looking at. This is basically the same problem as the blocking-resistance problem — they could do it by IP address ("that's a known Tor relay"), or by traffic fingerprint ("that looks like TLS but look here and here how it's different"), etc.
It's unlikely to have anything to do with deanonymizing Tor users, except insofar as they might have traffic flows from both sides of the circuit in their database. However, without concrete details, we can only speculate as well. We'd rather spend our time developing Tor and conducting research to make a better Tor.
Thanks to Roger and Lunar for edits and feedback on this post.
Many people set up new fast relays and then wonder why their bandwidth is not fully loaded instantly. In this post I'll walk you through the lifecycle of a new fast non-exit relay, since Tor's bandwidth estimation and load balancing has gotten much more complicated in recent years. I should emphasize that the descriptions here are in part anecdotal — at the end of the post I ask some research questions that will help us make better sense of what's going on.
I hope this summary will be useful for relay operators. It also provides background for understanding some of the anonymity analysis research papers that people have been asking me about lately. In an upcoming blog post, I'll explain why we need to raise the guard rotation period (and what that means) to improve Tor's anonymity. [Edit: here is that blog post]
A new relay, assuming it is reliable and has plenty of bandwidth, goes through four phases: the unmeasured phase (days 0-3) where it gets roughly no use, the remote-measurement phase (days 3-8) where load starts to increase, the ramp-up guard phase (days 8-68) where load counterintuitively drops and then rises higher, and the steady-state guard phase (days 68+).
Phase one: unmeasured (days 0-3).
When your relay first starts, it does a bandwidth self-test: it builds four circuits into the Tor network and back to itself, and then sends 125KB over each circuit. This step bootstraps Tor's passive bandwidth measurement system, which estimates your bandwidth as the largest burst you've done over a 10 second period. So if all goes well, your first self-measurement is 4*125K/10 = 50KB/s. Your relay publishes this number in your relay descriptor.
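The arithmetic behind that first self-measurement can be sketched as follows (an illustrative model of the numbers in the paragraph above, not Tor's actual code):

```python
# Illustrative model of Tor's initial bandwidth self-test (not Tor source code).
CIRCUITS = 4                     # circuits built into the network and back to itself
BYTES_PER_CIRCUIT = 125 * 1024   # 125KB pushed over each circuit
WINDOW_SECONDS = 10              # passive estimate: largest burst over a 10-second period

def self_test_estimate_kb_per_sec():
    total_bytes = CIRCUITS * BYTES_PER_CIRCUIT
    return total_bytes / WINDOW_SECONDS / 1024

print(self_test_estimate_kb_per_sec())  # 50.0
```

This 50KB/s figure is what the relay publishes in its first descriptor, which is why brand-new relays all start out looking equally slow.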
The directory authorities list your relay in the network consensus, and clients get good performance (and balance load across the network) by choosing relays proportional to the bandwidth number listed in the consensus.
Originally, the directory authorities would just use whatever bandwidth estimate you claimed in your relay descriptor. As you can imagine, that approach made it cheap for even a small adversary to attract a lot of traffic by simply lying. In 2009, Mike Perry deployed the "bandwidth authority" scripts, where a group of fast computers around the Internet (called bwauths) do active measurements of each relay, and the directory authorities adjust the consensus bandwidth up or down depending on how the relay compares to other relays that advertise similar speeds. (Technically, we call the consensus number a "weight" rather than a bandwidth, since it's all about how your relay's number compares to the other numbers, and once we start adjusting them they aren't really bandwidths anymore.)
The bwauth approach isn't ungameable, but it's a lot better than the old design. Earlier this year we plugged another vulnerability by capping your consensus weight to 20KB until a threshold of bwauths have an opinion about your relay — otherwise there was a several-day window where we would use your claimed value because we didn't have anything better to use.
So that's phase one: your new relay gets basically no use for the first few days of its life because of the low 20KB cap, while it waits for a threshold of bwauths to measure it.
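The cap logic can be modeled roughly like this (a deliberate simplification: the real bwauth voting and aggregation is more involved, and the measurement threshold and median rule here are assumptions for illustration):

```python
UNMEASURED_CAP = 20 * 1024  # 20KB cap while the relay is unmeasured

def effective_weight(claimed_bw, bwauth_measurements, threshold=3):
    """Simplified model: cap the consensus weight at 20KB until enough
    bwauths have an opinion, then derive the weight from measurements
    rather than from the relay's own claim."""
    if len(bwauth_measurements) < threshold:
        return min(claimed_bw, UNMEASURED_CAP)
    # once measured, use the median measurement instead of the claim
    return sorted(bwauth_measurements)[len(bwauth_measurements) // 2]

# A brand-new relay claiming 10MB/s still gets weighted at only 20KB:
print(effective_weight(10 * 1024 * 1024, []))  # 20480
```

This is why lying about your bandwidth no longer buys an attacker a several-day head start.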
Phase two: remote measurement (days 3-8).
Remember how I said the bwauths adjust your consensus weight based on how you compare to similarly-sized relays? At the beginning of this phase your relay hasn't seen much traffic, so your peers are the other relays who haven't seen (or can't handle) much traffic. Over time though, a few clients will build circuits through your relay and push some traffic, and the passive bandwidth measurement will provide a new larger estimate. Now the bwauths will compare you to your new (faster) peers, giving you a larger consensus weight, thus driving more clients to use you, in turn raising your bandwidth estimate, and so on.
Tor clients generally make three-hop circuits (that is, paths that go through three relays). The first position in the path, called the guard relay, is special because it helps protect against a certain anonymity-breaking attack. Here's the attack: if you keep picking new paths at random, and the adversary runs a few relays, then over time the chance drops to zero that *every single path you've made* is safe from the adversary. The defense is to choose a small number of relays (called guards) and always use one of them for your first hop — either you chose wrong, and one of your guards is run by the adversary and you lose on many of your paths; or you chose right and all of your paths are safe. Read the Guard FAQ for more details.
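The intuition behind guards is easy to make concrete with a little probability (the 5% adversary fraction below is an illustrative number, not a measured one):

```python
# If an adversary controls a fraction c of entry capacity, a client that
# picks a fresh random entry for each of n circuits keeps *all* of them
# safe with probability (1 - c)**n, which tends to zero as n grows.
# A client that picked one honest guard stays safe on every circuit.
def all_paths_safe(adversary_fraction, num_circuits):
    return (1 - adversary_fraction) ** num_circuits

# With a 5% adversary and 100 circuits, near-certain compromise somewhere:
print(round(all_paths_safe(0.05, 100), 4))  # 0.0059
```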
Only stable and reliable relays can be used as guards, so no clients are willing to use your brand new relay as their first hop. And since in this story you chose to set up a non-exit relay (so you won't be the one actually making connections to external services like websites), no clients will use it as their third hop either. That means all of your relay's traffic is coming from being the second hop in circuits.
So that's phase two: once the bwauths have measured you and the directory authorities lift the 20KB cap, you'll attract more and more traffic, but it will still be limited because you'll only ever be a middle hop.
Phase three: Ramping up as a guard relay (days 8-68).
This is the point where I should introduce consensus flags. Directory authorities assign the Guard flag to relays based on three characteristics: "bandwidth" (they need to have a large enough consensus weight), "weighted fractional uptime" (they need to be working most of the time), and "time known" (to make attacks more expensive, we don't want to give the Guard flag to relays that haven't been around a while first). This last characteristic is most relevant here: on today's Tor network, you're first eligible for the Guard flag on day eight.
Clients will only be willing to pick you for their first hop if you have the "Guard" flag. But here's the catch: once you get the Guard flag, all the rest of the clients back off from using you for their middle hops, because when they see the Guard flag, they assume that you have plenty of load already from clients using you as their first hop. Now, that assumption will become true in the steady-state (that is, once enough clients have chosen you as their guard node), but counterintuitively, as soon as you get the Guard flag you'll see a dip in traffic.
Why do clients avoid using relays with the Guard flag for their middle hop? Clients look at the scarcity of guard capacity, and the scarcity of exit capacity, and proportionally avoid using relays for positions in the path that aren't scarce. That way we allocate available resources best: relays with the Exit flag are used mostly for exiting when they're scarce, and relays with the Guard flag are used mostly for entering when they're scarce.
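A rough model of this weighting, assuming only that each path position needs about a third of total capacity (the exact formulas live in the directory protocol's bandwidth-weights computation and are more nuanced than this sketch):

```python
# Simplified model of how clients discount Guard-flagged relays for the
# middle position when guard capacity is scarce (not the exact dir-spec math).
def middle_position_weight(guard_capacity, total_capacity):
    needed_for_guard_position = total_capacity / 3  # each hop carries ~1/3 of traffic
    spare = max(guard_capacity - needed_for_guard_position, 0)
    return spare / guard_capacity  # fraction of a guard's capacity usable as a middle hop

# Scarce guard capacity (exactly 1/3 of the network): guards never serve as middles.
print(middle_position_weight(100, 300))  # 0.0
# Abundant guard capacity (2/3 of the network): guards take middle traffic too.
print(middle_position_weight(200, 300))  # 0.5
```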
It isn't optimal to allow this temporary dip in traffic (since we're not taking advantage of resources that you're trying to contribute), but it's a short period of time overall: clients rotate their guard nodes every 4-8 weeks, so pretty soon some of them will rotate onto your relay.
To be clear, there are two reasons why we have clients rotate their guard relays, and the reasons are two sides of the same coin: first is the above issue where new guards wouldn't see much use (since only new clients, picking their guards for the first time, would use a new guard), and second is that old established guards would accrue an ever-growing load since they'd have the traffic from all the clients that ever picked them as their guard.
One of the reasons for this blog post is to give you background so when I later explain why we need to extend the guard rotation period to many months, you'll understand why we can't just crank up the number without also changing some other parts of the system to keep up. Stay tuned for more details there, or if you don't want to wait you can read the original research questions and the followup research papers by Elahi et al and Johnson et al.
Phase four: Steady-state guard relay (days 68+).
Once your relay has been a Guard for the full guard rotation period (up to 8 weeks in Tor 0.1.1.11-alpha through 0.2.4.11-alpha, and up to 12 weeks in Tor 0.2.4.12-alpha and later), it should reach steady-state where the number of clients dropping it from their guard list balances the number of clients adding it to their guard list.
Research question: what do these phases look like with real-world data?
All of the above explanations, including the graphs, are just based on anecdotes from relay operators and from ad hoc examination of consensus weights.
Here's a great class project or thesis topic: using our publicly available Tor network metrics data, track the growth pattern of consensus weights and bandwidth load for new relays. Do they match the phases I've described here? Are the changes inside a given phase linear like in the graphs above, or do they follow some other curve?
Are there trends over time, e.g. it used to take less time to ramp up? How long are the phases in reality — for example, does it really take three days before the bwauths have a measurement for new relays?
How does the scarcity of Guard or Exit capacity influence the curves or trends? For example, when guard capacity is less scarce, we expect the traffic dip at the beginning of phase three to be less pronounced.
How different were things before we added the 20KB cap in the first phase, or before we did remote measurements at all?
Are the phases, trends, or curves different for exit relays than for non-exit relays?
The "steady-state" phase assumes a constant number of Tor users: in situations where many new users appear (like the botnet invasion in August 2013), current guards will get unbalanced. How quickly and smoothly does the network rebalance as clients shift guards after that?
There's a new Tor 0.2.4.17-rc to hopefully help mitigate some of the problems with the botnet issues Tor is experiencing. All packages, including the beta Tor Browser Bundles, have been updated. Relay operators are strongly encouraged to upgrade to the latest versions, since it mostly has server-side improvements in it, but users will hopefully benefit from upgrading too. Please try it out and let us know.
Tor Browser Bundle (2.4.17-beta-1)
- Update Tor to 0.2.4.17-rc
- Update NoScript to 126.96.36.199
- Update HTTPS Everywhere to 4.0development.11
[tl;dr: if you want your Tor to be more stable, upgrade to a Tor Browser Bundle with Tor 0.2.4.x in it, and then wait for enough relays to upgrade to today's 0.2.4.17-rc release.]
Starting around August 20, we started to see a sudden spike in the number of Tor clients. By now it's unmistakable: there are millions of new Tor clients and the numbers continue to rise:
Where do these new users come from? My current best answer is a botnet.
Some people have speculated that the growth in users comes from activists in Syria, Russia, the United States, or some other country that has good reason to have activists and journalists adopting Tor en masse lately. Others have speculated that it's due to massive adoption of the Pirate Browser (a Tor Browser Bundle fork that discards most of Tor's security and privacy features), but we've talked to the Pirate Browser people and the downloads they've seen can't account for this growth. The fact is, with a growth curve like this one, there's basically no way that there's a new human behind each of these new Tor clients. These Tor clients got bundled into some new software which got installed onto millions of computers pretty much overnight. Since no large software or operating system vendors have come forward to tell us they just bundled Tor with all their users, that leaves me with one conclusion: somebody out there infected millions of computers and as part of their plan they installed Tor clients on them.
It doesn't look like the new clients are using the Tor network to send traffic to external destinations (like websites). Early indications are that they're accessing hidden services — fast relays see "Received an ESTABLISH_RENDEZVOUS request" many times a second in their info-level logs, but fast exit relays don't report a significant growth in exit traffic. One plausible explanation (assuming it is indeed a botnet) is that it's running its Command and Control (C&C) point as a hidden service.
My first observation is "holy cow, the network is still working." I guess all that work we've been doing on scalability was a good idea. The second observation is that these new clients actually aren't adding that much traffic to the network. Most of the pain we're seeing is from all the new circuits they're making — Tor clients build circuits preemptively, and millions of Tor clients means millions of circuits. Each circuit requires the relays to do expensive public key operations, and many of our relays are now maxed out on CPU load.
There's a possible dangerous cycle here: when a client tries to build a circuit but it fails, it tries again. So if relays are so overwhelmed that they each drop half the requests they get, then more than half the attempted circuits will fail (since all the relays on the circuit have to succeed), generating even more circuit requests.
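The feedback loop is easy to quantify under a simple independence model:

```python
# If each relay independently succeeds with probability p, a 3-hop circuit
# succeeds with probability p**3, and a client that retries until success
# generates 1/p**3 attempts on average.
def circuit_success(per_relay_success, hops=3):
    return per_relay_success ** hops

def expected_attempts(per_relay_success, hops=3):
    return 1 / circuit_success(per_relay_success, hops)

print(circuit_success(0.5))    # 0.125: only 1 in 8 circuits succeeds
print(expected_attempts(0.5))  # 8.0: each client now generates 8x the requests
```

So relays dropping "only" half their requests translates into an 87.5% circuit failure rate and an eightfold amplification of circuit-building load.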
So, how do we survive in the face of millions of new clients?
Step one was to see if there was some simple way to distinguish them from other clients, like checking if they're using an old version of Tor, and have entry nodes refuse connections from them. Alas, it looks like they're running 0.2.3.x, which is the current recommended stable release.
Step two is to get more users using the NTor circuit-level handshake, which is new in Tor 0.2.4 and offers stronger security with lower processing overhead (and thus less pain to relays). Tor 0.2.4.17-rc comes with an added twist: we prioritize NTor create cells over the old TAP create cells that 0.2.3 clients send, which a) means relays will get the cheap computations out of the way first so they're more likely to succeed, and b) means that Tor 0.2.4 users will jump the queue ahead of the botnet requests. The Tor 0.2.4.17-rc release also comes with some new log messages to help relay operators track how many of each handshake type they're handling.
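Conceptually, the prioritization behaves like a two-class priority queue, where NTor create cells jump ahead of pending TAP ones while each class stays in arrival order (an illustrative sketch, not Tor's actual scheduler code):

```python
import heapq

NTOR, TAP = 0, 1  # lower number = served first

class CreateCellQueue:
    """Toy two-class queue: NTor create cells are processed before TAP
    cells, and cells of the same type keep FIFO order via a sequence number."""
    def __init__(self):
        self._heap = []
        self._seq = 0

    def push(self, handshake, cell):
        heapq.heappush(self._heap, (handshake, self._seq, cell))
        self._seq += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]

q = CreateCellQueue()
q.push(TAP, "tap-1")
q.push(NTOR, "ntor-1")
q.push(TAP, "tap-2")
print(q.pop(), q.pop(), q.pop())  # ntor-1 tap-1 tap-2
```

The effect is exactly the one described above: an overloaded relay gets the cheap NTor computations out of the way first, and 0.2.4 clients skip ahead of the 0.2.3-speaking botnet.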
(There's some tricky calculus to be done here around whether the botnet operator will upgrade his bots in response. Nobody knows for sure. But hopefully not for a while, and in any case the new handshake is a lot cheaper so it would still be a win.)
Step three is to temporarily disable some of the client-side performance features that build extra circuits. In particular, our circuit build timeout feature estimates network performance for each user individually, so we can tune which circuits we use and which we discard. First, in a world where successful circuits are rare, discarding some — even the slow ones — might be unwise. Second, to arrive at a good estimate faster, clients make a series of throwaway measurement circuits. And if the network is ever flaky enough, clients discard that estimate and go back and measure it again. These are all fine approaches in a network where most relays can handle traffic well; but they can contribute to the above vicious cycle in an overloaded network. The next step is to slow down these exploratory circuits in order to reduce the load on the network. (We would temporarily disable the circuit build timeout feature entirely, but it turns out we had a bug where things get worse in that case.)
Step four is longer-term: there remain some NTor handshake performance improvements that will make them faster still. It would be nice to get circuit handshakes on the relay side to be really cheap; but it's an open research question how close we can get to that goal while still providing strong handshake security.
Of course, the above steps aim only to get our head back above water for this particular incident. For the future we'll need to explore further options. For example, we could rate-limit circuit create requests at entry guards. Or we could learn to recognize the circuit building signature of a bot client (maybe it triggers a new hidden service rendezvous every n minutes) and refuse or tarpit connections from them. Maybe entry guards should demand that clients solve captchas before they can build more than a threshold of circuits. Maybe we rate limit TAP handshakes at the relays, so we leave more CPU available for other crypto operations like TLS and AES. Or maybe we should immediately refuse all TAP cells, effectively shutting 0.2.3 clients out of the network.
In parallel, it would be great if botnet researchers would identify the particular characteristics of the botnet and start looking at ways to shut it down (or at least get it off of Tor). Note that getting rid of the C&C point may not really help, since it's the rendezvous attempts from the bots that are hurting so much.
And finally, I still maintain that if you have a multi-million node botnet, it's silly to try to hide it behind the 4000-relay Tor network. These people should be using their botnet as a peer-to-peer anonymity system for itself. So I interpret this incident as continued exploration by botnet developers to try to figure out what resources, services, and topologies integrate well for protecting botnet communications. Another facet of solving this problem long-term is helping them to understand that Tor isn't a great answer for their problem.
Welcome to the tenth issue of Tor Weekly News, the weekly newsletter that covers what is happening in the skyrocketing Tor community.
Serious network overload
<borealis> if it really is a coordinated attack from a bot twice the size of the regular tor network i'm much surprised tor is still usable at all
The tremendous influx of new clients that started mid-August is stretching the current Tor network and software to its limits.
Mike Perry wishing to “compare load characteristics since 8/19 for nodes with different types of flags” issued a call to relay operators: “especially useful [are] links/graph images for connection counts, bandwidth, and CPU load since 8/19.”
It was reported on IRC that on some relays, only one circuit out of four attempts was successfully created. This unfortunately implies that clients retry building circuits, resulting in even more load on Tor relays.
The tor 0.2.4 series introduced a new circuit extension handshake dubbed “ntor”. This new handshake is faster (especially on the relay side) than the original circuit extension handshake, “TAP”. Roger Dingledine came up with a patch to prioritize circuit creations using ntor over TAP. Various observers reported that the overwhelming number of unidentified new clients were likely to be using Tor 0.2.3. Prioritizing ntor is thus likely to make them less of a burden for the network, and should help the network to function despite being overloaded by circuit creations.
Sathya and Isis both reported the patch to work. Nick Mathewson pointed out a few issues in the current implementation but overall it looks like a band-aid good enough for the time being.
Latest findings regarding traffic correlation attacks
Erik de Castro Lopo pointed tor-talk readers to a well-written new paper named Users Get Routed: Traffic Correlation on Tor by Realistic Adversaries. To be presented at the upcoming CCS 2013 conference this November in Berlin, the paper by Aaron Johnson, Chris Wacek, Rob Jansen, Micah Sherr, and Paul Syverson describes their experiments on traffic correlation attacks.
This research paper follows on a long series of earlier research papers to better understand how Tor is vulnerable to adversaries controlling portions of the Tor network or monitoring users and relays at the network level.
Roger Dingledine wrote to tor-talk readers: “Yes, a big enough adversary can screw Tor users. But we knew that. I think it’s great that the paper presents the dual risks of relay adversaries and link adversaries, since most of the time when people are freaking out about one of them they’re forgetting the other one. And we really should raise the guard rotation period. If you do their compromise graphs again with guards rotated every nine months, they look way different.”
One tricky question raised by a longer guard rotation period is: “How do we keep clients properly balanced to match the guard capacities?” It is probably also another signal for any Tails supporter who wishes to help implement guard persistence.
“I have plans for writing a blog post about the paper, to explain what it means, what it doesn’t mean, what we should do about it, and what research questions remain open” wrote Roger. Let’s stay tuned!
A peek inside the Pirate Browser
Torrent-sharing website The Pirate Bay started shipping a custom browser — the Pirate Browser — on August 10th. They advertised using Tor to circumvent censorship but unfortunately did not provide any source code for their project.
Matt Pagan examined the contents of the package in order to get a better idea of what it was. Using cryptographic checksums, he compared the contents of the Pirate Browser 0.6b archive to the contents of the Tor Browser Bundle 2.3.25-12 (en-US version).
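This kind of comparison can be sketched with standard tools. The directory and file names below are invented toy stand-ins for the two unpacked bundles, not their actual layouts:

```shell
# Hash every file in each unpacked bundle and diff the sorted
# digest lists: identical files drop out of the diff, while
# modified or added files remain visible.
mkdir -p pirate_browser tor_browser
printf 'same tor binary\n' > pirate_browser/tor
cp pirate_browser/tor tor_browser/tor
printf 'pirate torrc\n' > pirate_browser/torrc
printf 'tbb torrc\n'    > tor_browser/torrc
(cd pirate_browser && find . -type f -exec sha256sum {} + | sort -k2) > pirate.sums
(cd tor_browser    && find . -type f -exec sha256sum {} + | sort -k2) > tbb.sums
diff pirate.sums tbb.sums || true  # only the torrc lines differ
```

Matching digests prove two files are byte-for-byte identical, which is how unmodified components such as the bundled tor binary can be identified with confidence.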
According to Matt’s findings, the Pirate Browser includes unmodified versions of tor 0.2.3.25 and Vidalia 0.2.20. The tor configuration deviates slightly from the one shipped with the Tor Browser Bundle. One section labeled “Configured for speed” unfortunately shows a flawed understanding of the Tor network. Roger Dingledine commented in a subsequent email: “Just for the record, the three lines here don’t help speed much (or maybe at all).”
The remaining configuration change, which “probably has the biggest impact on performance”, according to Roger, excludes exit nodes in Denmark, Ireland, the United Kingdom, the Netherlands, Belgium, Italy, China, Iran, Finland, and Norway. “Whether it improves or reduces performance [Roger] cannot say, though. Depends on a lot of complex variables around Internet topologies.”
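In torrc syntax, such a country-based exclusion looks like the following. This is a reconstruction from the countries listed above, not the Pirate Browser’s exact configuration:

```
## Avoid building circuits that exit in the listed countries.
## Note that without "StrictNodes 1", Tor treats ExcludeExitNodes
## as a preference and may still pick an excluded exit if no
## other suitable relay is available.
ExcludeExitNodes {dk},{ie},{gb},{nl},{be},{it},{cn},{ir},{fi},{no}
```

Shrinking the pool of usable exits in this way changes which relays carry the traffic, which is why the performance effect is hard to predict.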
The browser itself is based on Firefox 23.0, with FoxyProxy configured to use Tor only for a few specific addresses, and a few extra bookmarks.
Later, Matt also highlighted that some important Tor Browser extensions, namely HTTPS Everywhere, NoScript, and Torbutton, were missing from the Pirate Browser.
In any case, the Pirate Browser is unlikely to explain the sudden influx of new Tor clients. grarpamp forwarded an email exchange with the Pirate Browser admin contact which shows that the numbers (550 000 known direct downloads) and dates (“most downloads during the first week”) do not match.
Monthly status reports for August 2013
The wave of regular monthly reports from Tor project members for the month of August has begun. Sherief Alaa released his report first, followed by reports from George Kadianakis, Lunar, Arturo Filastò, Colin C., Arlo Breault, Philipp Winter, Roger Dingledine, Karsten Loesing, and Isis Lovecruft. The latter also caught up with June and July.
Help Desk Roundup
This week the Tor help desk saw an increase in the number of users wanting to download or install Orbot. Orbot can be downloaded from the Google Play store, the Amazon App store, f-droid.org, and guardianproject.info. Guides on using Orbot can be found on the Guardian Project’s Orbot page, or on the Tor Project’s Android page. It looks like Orbot is currently inaccessible from the Google Play store in Iran. Please join the discussion on tor-talk if you have input about the latter.
All versions of the Tor Browser Bundle which include tor 0.2.4.x have been reported to work in Iran. This includes the latest Pluggable Transport Bundle, the 3.0 alpha series, and the 2.4 beta series. Follow our Farsi blog for more Iran related news.
The next Tails contributors meeting will happen on IRC on September 4th at 8pm UTC (10pm CEST). “Every one interested in contributing to Tails is welcome” to join #tails-dev on the OFTC network.
Yawning Angel has been “designing a UDP based protocol to serve as the bulk data transport for something along the lines of ‘obfs3, but over UDP’.” They are soliciting feedback on their initial draft of the Lightweight Obfuscated Datagram Protocol (LODP).
Kathy Brade and Mark Smith have released a first patch for Mozilla’s update mechanism which “successfully updated TBB on Linux, Windows, and Mac OS ‘in the lab’ using both incremental and ‘full replace’ updates.” This is meant for the 3.x series of the Tor Browser Bundle and is still a work in progress, but it is a significant milestone toward streamlined updates for TBB users.
Erinn Clark announced that the software powering trac.torproject.org has been upgraded to version 0.12.3. Among several other improvements, this new version allowed Erinn to experiment with the often requested Git integration.
David Goulet has released the second release candidate for the 2.0 rewrite of Torsocks: “Please continue to test, review and contribute it!”
Much to her surprise, Erinn Clark found a “fraudulent PGP key with [her] email address” on the keyservers. “Do not under any circumstances trust anything that may have ever been signed or encrypted with this key” of short id 0xCEE1590D. She reminded everyone that the Tor Project’s official signatures are listed on the project’s website.
This issue of Tor Weekly News has been assembled by Lunar, dope457, mttp, malaparte, mrphs, bastik, Karsten Loesing, and Roger Dingledine.
Want to continue reading TWN? Please help us create this newsletter. We still need more volunteers to watch the Tor community and report important news. Please see the project page, write down your name and subscribe to the team mailing list if you want to get involved!
Welcome to the ninth issue of Tor Weekly News, the weekly newsletter that covers what is happening in the determined Tor community.
Orweb Security Advisory
On August 21st, Nathan Freitas from the Guardian Project issued a security advisory regarding a possible anonymity flaw affecting Orweb:
“The Orweb browser app is vulnerable to leak the actual IP of the device it is on, if it loads a page with HTML5 video or audio tags on them, and those tags are set to auto-start or display a poster frame. On some versions of Android, the video and audio player start/load events happen without the user requesting anything, and the request to the URL for the media src or through image poster is made outside of the proxy settings”, wrote Nathan.
Users who use root mode with transparent proxying are NOT affected by this flaw, as that mode proxies all traffic of the entire device or of a particular app.
Unfortunately, the problem mentioned above hasn't been fixed yet, as there is no patch the developers are happy with. According to Nathan, the temporary solution is to “switch to Firefox, with the appropriate set of add-ons.” The Guardian Project has updated its website with a step by step guide on how to set this up.
“Why would anyone want a deterministic build process?”
“The short answer is: to protect against targeted attacks,” introduced Mike. With automatic remote updates becoming the norm, it becomes very interesting for malware to “distribute copies of itself to tens or even hundreds of millions of machines in a single, officially signed, instantaneous update.” The attack shifts from attacking millions of machines to attacking the few that are involved in “software development and build processes”.
Mike concludes with how deterministic builds can mitigate the issue: “in [Tor's] case, any individual can use our anonymity network to privately download our source code, verify it against public signed, audited, and mirrored git repositories, and reproduce our builds exactly, without being subject to such targeted attacks. If they notice any differences, they can alert the public builders/signers, hopefully using a pseudonym or our anonymous trac account.”
Even if “it is important for Tor to set an example on this point”, Mike hopes that Linux distributions will follow in making deterministic packaging the norm. It looks like at least NixOS and now Debian have started working on this.
Be sure to read Mike's post to get the full picture.
Filters and the default Tor Browser search engine
Four months ago, an anonymous reporter complained that the search engine used by default by the Tor Browser, Startpage, had a “family filter” enabled by default. The reporter pointed out that it was pretty funny “for a browser that people use to evade censorship and filters”. Another anonymous contributor quickly pointed out that the filter could be deactivated in a few clicks in the Startpage preferences.
The issue got some more attention a few days ago when Nick Mathewson mentioned hearing reports that the filter was blocking “LGBT stuff, which is of course serious”. Nick further identified that the filter was blocking, among several other things, searches for “The Owl and the Pussy-Cat”, “Pussy Riot”, “Dick Cheney”, “Cock Robin”, and “Gerald Cock”.
Censoring 19th-century poetry and repressed Russian punk bands was enough to make Nick conclude bluntly: “let's kill this filter hard”.
Mike Perry had some insights: “What we're seeing here is actually a change in Google's Safesearch. It used to be on by default and quite a bit smarter about differentiating porn from non-porn.” Mike mailed the Startpage team to explain the problem and suggest that they leave the filter off by default.
Should they decide to leave it on, both Nick and Mike agreed that a technical workaround should be implemented to automatically deactivate the filter when using the Tor Browser.
Sudden rise in direct Tor users
On Tuesday, August 27th, Roger Dingledine drew attention to the huge increase in the number of running Tor clients. Their number seems to have doubled since August 19th, according to the count of directly connecting users.
According to Roger, this is not just a fluke in the metrics data. The extra load on the directory authorities is clearly visible, but it does not look like overall network performance has been affected so far.
The cause is still unknown, but there are already speculations about the Pirate Browser or the new “anti-piracy” law in Russia which has been in force since August 1st. As Roger pointed out, “some good solid facts would sure be useful.”
Help Desk Roundup
Users continue to have trouble verifying package signatures. One user was confused when the signature was automatically saved as a “.txt” file. Other problems included not running the command from the correct directory, and downloading a signature that did not correspond to the downloaded file.
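The mechanics of detached-signature verification can be demonstrated end to end with a throwaway key. This is only a toy sketch with invented file names; real bundles are verified against the Tor Project's published signing keys instead:

```shell
# Generate a disposable key in a temporary keyring, sign a dummy
# "bundle", and verify it. Note the two pitfalls mentioned above:
# the file and its detached .asc signature must both be in the
# current directory, and the signature must match the exact file
# that was downloaded.
export GNUPGHOME="$(mktemp -d)"
cat > keyspec <<'EOF'
%no-protection
Key-Type: RSA
Key-Length: 2048
Key-Usage: sign
Name-Real: Demo Signer
Name-Email: demo@example.org
Expire-Date: 0
%commit
EOF
gpg --batch --gen-key keyspec
printf 'bundle contents\n' > bundle.tar.gz
gpg --batch --detach-sign --armor --output bundle.tar.gz.asc bundle.tar.gz
gpg --verify bundle.tar.gz.asc bundle.tar.gz
```

Renaming the signature (for instance to “.txt”) does not break verification as long as the correct file pair is passed to `gpg --verify`; what does break it is verifying against a different file than the one the signature was made for.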
Users sometimes write the help desk seeking clarification about misconceptions about Tor. Examples of such misconceptions include “Is it true that Tor is illegal in the United States?” and “Is it true that Tor has been compromised by the NSA?”. Using Tor is not currently illegal anywhere. For information about the recent vulnerability, users are advised to read the recent blog post on the subject.
Not all computers currently have their clocks synchronized. This means that any timestamps in the Tor protocol can unfortunately be used to fingerprint Tor users. Nick Mathewson would like to improve the situation and has sent proposal 222, aiming to eliminate “passive timestamp exposure”, for review.
Karsten Loesing has made further progress on “experimenting with a client and private bridge connected over uTP”. Among other improvements, he reduced the time for a client to bootstrap over uTP from 2 minutes to 6 seconds.
Orbot's new version 12.0.5 brings identity switching-by-swiping along with a few bugfixes. It can be downloaded from Google Play or from the Guardian Project's channels.
GSoC students sent another wave of bi-weekly reports: Kostas Jakeliunas on Searchable Metrics Archive, Johannes Fürmann on EvilGenius, Hareesan on the Steganography Browser Extension, Robert on Stream-RTT, and Cristian-Matei Toader on Tor capabilities.
The Torservers.net crowdfunding campaign for Tor exit bandwidth ended on August 26th, yielding “3771,84 Euro to be spread equally across our current seven organizations”, announced Moritz Bartl.
Kostas Jakeliunas answered George's call for help to gather more accurate bridge statistics by writing step by step instructions on how to upgrade a bridge running on a Raspberry Pi to use the tor master branch. Lunar also pointed out that — thanks to Peter Palfrader's work on setting up continuous integration — Debian packages for the tor master branch were also available and ready to be used.
This issue of Tor Weekly News has been assembled by Lunar, dope457, mttp and Karsten Loesing.
Want to continue reading TWN? Please help us create this newsletter. We still need more volunteers to watch the Tor community and report important news. Please see the project page, write down your name and subscribe to the team mailing list if you want to get involved!