Archive - Oct 2011

Research problems: Ten ways to discover Tor bridges

While we're exploring smarter ways of getting more bridge addresses, and while the bridge arms race hasn't heated up yet in most countries (or, in the case of China, has already outpaced the number of bridges we have), it's the perfect time to take stock of bridge address enumeration attacks and how well we can defend against them.

For background, bridge relays (aka bridges) are Tor relays that aren't listed in the main Tor directory. So even if an attacker blocks all the public relays, they still need to block all these "private" or "dark" relays too.

Here are ten classes of attacks to discover bridges, examples of them we've seen or worry about in practice, and some ideas for how to resolve or mitigate each issue. If you're looking for a research project, please grab one and start investigating!

#1: Overwhelm the public address distribution strategies.

China broke our https bridge distribution strategy in September 2009 by just pretending to be enough legitimate users from enough different subnets on the Internet. They broke the Gmail bridge distribution strategy in March 2010. These were easy to break because we don't have enough addresses relative to the size of the attacker (at this moment we're giving out 176 bridges by https and 201 bridges by Gmail, leaving us 165 to give out through other means like social networks), but it's not just a question of scale: we need better strategies that require attackers to do more or harder work than legitimate users.

#2: Run a non-guard non-exit relay and look for connections from non-relays.

Normal clients use guard nodes for the first hop of their circuits to protect them from long-term profiling attacks; but we chose to have bridge users use their bridge as a replacement for the guard hop, so we don't force them onto four-hop paths which would be less fun to use. As a result, if you run a relay that doesn't have the Guard flag, the only Tors who end up building circuits through you are relays (which you can identify from the public consensus) and bridges.
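
To make the attack concrete, here's a rough Python sketch of the enumeration logic, assuming you already have the consensus relay list and a log of which inbound connections went on to extend circuits through you (the function and variable names here are hypothetical, not a real Tor API):

    # Attack #2 as a filter: relays come from the public consensus, so any
    # non-relay address that builds circuits through us is a bridge candidate.
    def likely_bridges(inbound_circuit_sources, consensus_relay_addrs):
        """Return inbound addresses that build circuits but aren't public relays."""
        return {addr for addr in inbound_circuit_sources
                if addr not in consensus_relay_addrs}

    # Toy example: two known relays and one unknown address extended through us.
    consensus = {"198.51.100.7", "203.0.113.42"}
    seen = {"198.51.100.7", "203.0.113.42", "192.0.2.99"}
    print(likely_bridges(seen, consensus))  # {'192.0.2.99'}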

This attack has been floating around for a while, and is documented for example in Zhen Ling et al's Extensive Analysis and Large-Scale Empirical Evaluation of Tor Bridge Discovery paper.

The defense we plan is to make circuits through bridges use guards too. The naive way to do it would be for the client to choose a set of guards as her possible next hops after the bridge; but then each new client using the bridge increasingly exposes the bridge. The better approach is to make use of Tor's loose source routing feature to let the bridge itself choose the guards that all of the circuits it handles will use: that is, transparently layer an extra one-hop circuit inside the client's circuit. Those familiar with Babel's design will recognize this trick by the name "inter-mix detours".

Using a layer of guards after the bridge has two features: first, it completely removes the "bridges directly touch non-guards" issue, turning the attack from a deterministic one to a probabilistic one. Second, it reduces the exposure of the bridge to the rest of the network, since only a small set of relays will ever see a direct connection from it. The tradeoff, alas, is worse performance for bridge users. See proposal 188 for details.

[Edit: a friendly researcher pointed out to me that another solution here is to run the bridge as multi-homed, meaning the address that the relay sees isn't the address that the censor should block. That solution also helps resolve issues 3-5!]

#3: Run a guard relay and look for protocol differences.

Bridges are supposed to behave like relays with respect to the users using them, but like clients with respect to the relays they make connections to. Any slip-ups we introduce where the bridge acts like a relay with respect to the next hop are ways the next hop can distinguish it. Recent examples include "bridges fetched directory information like relays rather than like clients", "bridges didn't use a CREATE_FAST cell for the first hop of their own circuits like clients would have", "bridges didn't reject CREATE and CREATE_FAST cells on connections they had initiated like clients would have", and "bridges distinguish themselves in their NETINFO cell".

There's no way that's the end of them. We could sure use some help auditing the design and code for similar issues.

#4: Run a guard relay and do timing analysis.

Even if we fix issues #2 and #3, it may still be possible for a guard relay to look at the "clients" that are connecting to it, and figure out based on latency that some of the circuits from those clients look like they're two hops away rather than one hop away.

I bet there are active tricks to improve the attack accuracy. For example, the relay could watch round-trip latency from the circuit originator (seeing packets go towards Alice, and seeing how long until a packet shows up in response), and compare that latency to what he sees when probing the previous hop with some cell that will get an immediate response rather than going all the way to Alice. Removing all the ways of probing round-trip latency to an adjacent Tor relay (either in-protocol or out-of-protocol) is a battle we're not going to win.
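
To illustrate what the statistics might look like, here's a crude sketch of the distinguishing test a hostile guard could run. The threshold and measurements are entirely made up for illustration:

    # Crude sketch of the timing test: if round-trips toward the circuit
    # originator are consistently slower than round-trips to the adjacent
    # hop itself, there's probably an extra hop behind our "client", i.e.
    # it's a bridge. The 20ms threshold is an invented illustration.
    from statistics import median

    def looks_like_bridge(rtts_to_originator_ms, rtts_to_adjacent_hop_ms,
                          min_gap_ms=20.0):
        gap = median(rtts_to_originator_ms) - median(rtts_to_adjacent_hop_ms)
        return gap > min_gap_ms

    print(looks_like_bridge([95, 102, 99, 110], [60, 58, 63, 61]))  # True
    print(looks_like_bridge([62, 60, 65, 61], [60, 58, 63, 61]))    # False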

The research question remains though: how hard is this attack in practice? It's going to come down to statistics, which means it will be a game of driving up the attacker's false positives. It's hard to know how best to solve it until somebody does the engineering work for the attack.

If the attack turns out to work well (and I expect it will), the "bridges use guards" design will limit the damage from the attack.

#5: Run a relay and try connecting back to likely ports on each client that connects to you.

Many bridges listen for incoming client connections on port 443 or 9001. The adversary can run a relay and actively portscan each client that connects, to see which ones are running services that speak the Tor protocol. This attack was published by Eugene Vasserman in Membership-concealing overlay networks and by Jon McLachlan in On the risks of serving whenever you surf: Vulnerabilities in Tor's blocking resistance design, both in 2009.
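
As a sketch of what the scanner side looks like, assuming it settles for "accepts a TLS handshake on a likely ORPort" (a real probe would also need to speak enough of the Tor link protocol to confirm):

    # Sketch of attack #5: when a "client" connects to our relay, connect
    # back to its likely ORPorts and see whether something answers a TLS
    # handshake there. This only checks TLS, not the Tor protocol itself.
    import socket, ssl

    LIKELY_ORPORTS = [443, 9001]

    def probe(addr, port, timeout=5.0):
        """Return True if addr:port completes a TLS handshake."""
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE   # bridges use self-signed certs
        try:
            with socket.create_connection((addr, port), timeout=timeout) as sock:
                with ctx.wrap_socket(sock) as tls:
                    return tls.version() is not None
        except (OSError, ssl.SSLError):
            return False

    def bridge_candidates(client_addrs):
        return {a for a in client_addrs
                if any(probe(a, p) for p in LIKELY_ORPORTS)}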

The "bridges use guards" design partially resolves this attack as well, since we limit the exposure of the bridge to a small group of relays that probably could have done some other above attacks as well.

But it does not wholly resolve the concern: clients (and therefore also bridges) don't use their entry guards for directory fetches at present. So while the bridge won't build circuits through the relay it fetches directory information from, it will still reveal its existence. That's another reason to move forward with the "directory guard" design.

#6: Scan the Internet for services that talk the Tor protocol.

Even if we successfully hide the bridges behind guards, the adversary can still blindly scan for them and pretend to be a client. To make it more practical, he could focus on scanning likely networks, or near other bridges he's discovered. We called this topic "scanning resistance" in our original bridge design paper.

There's a particularly insidious combination of #5 and #6 if you're a government-scale adversary: watch your government firewall for SSL flows (since Tor tries to blend in with SSL traffic), and do active followup probing to every destination you see. Whitelist destinations you've already checked if you want to trade some precision for efficiency. This scale of attack requires some serious engineering work for a large country, but early indications are that China might be investigating exactly this approach.
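
Sketched as a loop, under the assumption that the DPI gear can hand us a stream of SSL-flow destination addresses and that we reuse a probe function like the one sketched in #5:

    # Sketch of the #5/#6 combination at firewall scale: every destination
    # of an outbound SSL flow gets actively probed once, with a whitelist
    # of already-checked destinations to save work. ssl_flow_destinations
    # is a hypothetical feed from the DPI gear.
    def firewall_driven_scan(ssl_flow_destinations, probe, ports=(443, 9001)):
        checked, bridges = set(), set()
        for dest in ssl_flow_destinations:
            if dest in checked:
                continue                  # whitelist of already-probed hosts
            checked.add(dest)
            if any(probe(dest, p) for p in ports):
                bridges.add(dest)
        return bridges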

The answer here is to give the bridge user some secret when she learns the bridge address, and require her to prove knowledge of that secret before the bridge will admit to knowing the Tor protocol. For example, we could imagine running an Apache SSL webserver with a pass-through module that tunnels the user's traffic to the Tor relay once she presents the right password. Or Tor could handle that authentication itself. BridgeSPA: Improving Tor Bridges with Single Packet Authorization offers an SPA-style approach, with the drawbacks of requiring root on both sides and being OS-specific.
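
Here's a toy sketch of that gatekeeper idea: a tiny TCP front-end that serves a boring HTTP error to everybody except connections that lead with a shared secret, which get forwarded to the local Tor ORPort. Everything here (the framing, the secret, the ports) is invented for illustration; a real version would live inside TLS, e.g. as an Apache module:

    # Toy gatekeeper: admit to speaking Tor only after seeing a shared secret.
    import socket, threading

    SECRET = b"hunter2\n"          # handed out alongside the bridge address
    ORPORT = ("127.0.0.1", 9001)   # the hidden local Tor listener

    def pump(src, dst):
        while (data := src.recv(4096)):
            dst.sendall(data)

    def handle(conn):
        if conn.recv(len(SECRET)) != SECRET:
            conn.sendall(b"HTTP/1.0 404 Not Found\r\n\r\n")  # look like a dull webserver
            conn.close()
            return
        upstream = socket.create_connection(ORPORT)          # now behave as a bridge
        threading.Thread(target=pump, args=(upstream, conn), daemon=True).start()
        pump(conn, upstream)

    def serve(port=8443):
        with socket.socket() as listener:
            listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            listener.bind(("", port))
            listener.listen()
            while True:
                conn, _ = listener.accept()
                threading.Thread(target=handle, args=(conn,), daemon=True).start()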

Another avenue to explore is putting some of the bridge addresses behind a service like Telex, Decoy Routing, or Cirripede. These designs let users privately tag a flow (e.g. an SSL handshake) in such a way that tagged flows are diverted to a Tor bridge while untagged flows continue as normal. So now we could deploy a vanilla Apache in one place and a vanilla Tor bridge in another, and not have to modify either of them. The Tor client bundle would need an extra piece of software though, and there are still some engineering and deployment details to be worked out.

#7: Break into the Tor Project infrastructure.

The bridge directory authority aggregates the list of bridges and periodically sends it to the bridgedb service so it can parcel addresses out by its various distribution strategies. Breaking into either of these services would give you the list of bridges.

We can imagine some design changes to make the risk less bad. For one, people can already run bridges that don't publish to the bridge directory authority (and then distribute their addresses themselves). Second, I had a nice chat with a Chinese NGO recently who wants to set up a bridge directory authority of their own, and distribute custom Vidalia bridge bundles to their members that are configured to publish their bridge addresses to this alternate bridge directory authority. A third option is to decentralize the bridge authority and bridgedb services, such that each component only learns about a fraction of the total bridge population — that design quickly gets messy though in terms of engineering and in terms of analyzing its security against various attacks.

#8: Just watch the bridge authority's reachability tests.

You don't actually need to break in to the bridge authority. Instead, you can just monitor its network connection: it will periodically test reachability of each bridge it knows, in order to let the bridgedb service know which addresses to give out.

We could do these reachability tests through Tor, so watching the bridge authority doesn't tell you anything about what it's testing. But that just shifts the risk onto the rest of the relays, such that an adversary who runs or monitors a sample of relays gets to learn about a corresponding sample of bridges.

One option is to decentralize the testing such that monitoring a single location doesn't give you the whole bridge list. But how exactly to distribute it, and onto what, is messy from both the operational and research angles. Another option would be for the bridges themselves to ramp up the frequency of their reachability tests (they currently self-test for reachability before publishing, to give quick feedback to their operator if they're misconfigured). Then the bridges can just anonymously publish an authenticated "still here" message once an hour, so (assuming they all tell the truth) the bridge authority never has to do any testing. But this self-testing also allows an enumeration attack, since we build a circuit to a random relay and then try to extend back to our bridge address! Maybe bridges should be asking their guards to do the self-testing — once they have guards, that is?
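
For the "still here" variant, the sketch is simple: each bridge signs a timestamped heartbeat with its identity key and publishes it anonymously. In this illustration, Ed25519 from the Python "cryptography" package stands in for the bridge's real identity key:

    # Sketch of a signed hourly heartbeat, so the bridge authority never
    # has to probe anything itself.
    import time
    from cryptography.hazmat.primitives.asymmetric import ed25519

    identity_key = ed25519.Ed25519PrivateKey.generate()

    def heartbeat():
        msg = b"still-here %d" % int(time.time())
        return msg, identity_key.sign(msg)

    # The authority verifies the signature (and checks freshness) instead:
    msg, sig = heartbeat()
    identity_key.public_key().verify(sig, msg)  # raises InvalidSignature if forged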

These questions are related to the question of learning whether a bridge has been blocked in a given country. More on that in a future blog post.

#9: Watch your firewall and DPI for Tor flows.

While the above attacks have to do with recognizing or inducing bridge-specific behavior, another class of attacks is just to buy some fancy Deep Packet Inspection gear and have it look for, say, characteristics of the SSL certificates or handshakes that make Tor flows stand out from "normal" SSL flows. Iran has used this strategy to block Tor twice, and it lets them block bridges for free. The attack is most effective if you have a large and diverse population of Tor users behind your firewall, since you'll only be able to learn about bridges that your users try to use.

We can fix the issue by making Tor's handshake more like a normal SSL handshake, but I wonder if that's really a battle we can ever win. The better answer is to encourage a proliferation of modular Tor transports, like obfsproxy, and get the rest of the research community interested in designing tool-neutral transports that blend in better.

#10: Zig-zag between bridges and users.

Start with a set of known bridge addresses. Watch your firewall to see who connects to those bridges. Then watch those users, and see what other addresses they connect to. Wash, rinse, repeat.
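
As an algorithm it's just alternating set expansion over the firewall's flow logs. A sketch, where the two log-query functions and the Tor-flow test are hypothetical stand-ins for what a national firewall could provide:

    # Zig-zag as alternating set expansion over flow logs.
    def zigzag(seed_bridges, users_contacting, destinations_of, is_tor_flow,
               rounds=5):
        bridges, users = set(seed_bridges), set()
        for _ in range(rounds):
            users |= {u for b in bridges for u in users_contacting(b)}
            bridges |= {d for u in users
                        for d in destinations_of(u) if is_tor_flow(d)}
        return bridges, users

    # Toy logs: user U1 reaches known bridge B1 and also unknown bridge B2.
    by_user = {"U1": {"B1", "B2"}, "U2": {"B2"}}
    by_dest = {"B1": {"U1"}, "B2": {"U1", "U2"}}
    print(zigzag({"B1"}, lambda b: by_dest.get(b, set()),
                 lambda u: by_user.get(u, set()), lambda d: d.startswith("B")))
    # ({'B1', 'B2'}, {'U1', 'U2'})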

As above, this attack only works well when you have a large population of Tor users behind your firewall. It also requires some more engineering work to be able to trace source addresses in addition to destination addresses. But I'd be surprised if some major government firewalls don't have this capability already.

The solution here probably involves partitioning bridge addresses into cells, such that zig-zagging from users to bridges only gives you a bounded set of bridges (and a bounded set of users, for that matter). That approach will require some changes in our bridgedb design though. Currently when a user requests some bridge addresses, bridgedb maps the user's "address" (IP address, gmail account name, or whatever) into a point in the keyspace (using consistent hashing), and the answers are the k successors of that point in the ring (using DHT terminology).
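
Concretely, the current scheme looks something like this simplified sketch; the real bridgedb has more machinery (separate pools per distribution strategy, rotation over time) that this ignores:

    # Simplified sketch of the consistent-hashing ring described above.
    import bisect, hashlib

    def point(value: str) -> int:
        return int.from_bytes(hashlib.sha256(value.encode()).digest()[:8], "big")

    def answers(user_addr, bridges, k=3):
        ring = sorted((point(b), b) for b in bridges)   # bridges on the ring
        keys = [p for p, _ in ring]
        start = bisect.bisect_left(keys, point(user_addr))
        return [ring[(start + i) % len(ring)][1] for i in range(min(k, len(ring)))]

    bridges = ["10.0.%d.1:443" % i for i in range(20)]
    print(answers("user@example.com", bridges))   # same user always gets same k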

Dan Boneh suggested an alternate approach where we do keyed hashes of the user's address and all the bridge fingerprints, and return all bridges whose hashed fingerprints match the user's hash in the first b bits. The result is that users would tend to get clustered by the bridges they know. That feature limits the damage from the zig-zag attack, but does it increase the risks in some distribution strategies? I already worry that bridge distribution strategies based on social networks will result in clusters of socially related users using the same bridges, meaning the attacker can reconstruct the social network. If we isolate socially related users in the same partition, do we magnify that problem? This approach also needs more research work to make it scale such that we can always return about k results, even as the address pool grows, and without reintroducing zig-zag vulnerabilities.
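
A sketch of the keyed-hash variant, with an invented key and an illustrative b=4 (so with 64 bridges each user sees about four of them):

    # Sketch of the keyed-hash bucketing: a user gets exactly those bridges
    # whose HMAC'd fingerprints match the HMAC of her address in the first
    # b bits. The key and b are illustrative, not a deployed scheme.
    import hashlib, hmac

    def bucket(key: bytes, value: str, b: int) -> int:
        digest = hmac.new(key, value.encode(), hashlib.sha256).digest()
        return int.from_bytes(digest, "big") >> (256 - b)   # first b bits

    def answers(key, user_addr, bridge_fingerprints, b=4):
        mine = bucket(key, user_addr, b)
        return [fp for fp in bridge_fingerprints if bucket(key, fp, b) == mine]

    fps = ["FP%02d" % i for i in range(64)]
    print(answers(b"distribution-key", "user@example.com", fps))  # ~4 of 64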

#11: ...What did I miss?

Tor 0.2.3.7-alpha is out

Tor 0.2.3.7-alpha fixes a crash bug in 0.2.3.6-alpha introduced by the new v3 handshake. It also resolves yet another bridge address enumeration issue.

All packages are updated, with the exception of the OS X PPC packages. The build machine is down and packages will be built as soon as it is back online.

https://www.torproject.org/download

Changes in version 0.2.3.7-alpha - 2011-10-30
Major bugfixes:

  • If we mark an OR connection for close based on a cell we process,
    don't process any further cells on it. We already avoid further
    reads on marked-for-close connections, but now we also discard the
    cells we'd already read. Fixes bug 4299; bugfix on 0.2.0.10-alpha,
    which was the first version where we might mark a connection for
    close based on processing a cell on it.
  • Fix a double-free bug that would occur when we received an invalid
    certificate in a CERT cell in the new v3 handshake. Fixes bug 4343;
    bugfix on 0.2.3.6-alpha.
  • Bridges no longer include their address in NETINFO cells on outgoing
    OR connections, to allow them to blend in better with clients.
    Removes another avenue for enumerating bridges. Reported by
    "troll_un". Fixes bug 4348; bugfix on 0.2.0.10-alpha, when NETINFO
    cells were introduced.

Trivial fixes:

  • Fixed a typo in a hibernation-related log message. Fixes bug 4331;
    bugfix on 0.2.2.23-alpha; found by "tmpname0901".

Torsocks 1.2 Released

I'm happy to announce the release of Torsocks 1.2.

Torsocks is an application for Linux, BSD and Mac OS X that allows you to use network applications such as ssh and irssi with Tor.

A quick guide to using Torsocks and an overview of its compatibility with a variety of popular applications is available at the project's home page.

The focus for this release was to add a test suite, clean up the source code, simplify the build, compile on more versions of BSD, and add a few new defences that make it harder for users to leak information that might compromise their anonymity.

Full details of the changes are available in the release change log.

If you don't want to wait until your distribution packages the latest version of torsocks, you can download the source from the project home page. If you find any problems with Torsocks please don't be shy about opening a bug.

Many thanks to Anthony Basile and the mysterious 'foobi..@gmail.com' for their substantial contribution to this release.

- Robert

Tor 0.2.2.34 is released (security patches)

Tor 0.2.2.34 fixes a critical anonymity vulnerability where an attacker
can deanonymize Tor users. Everybody should upgrade.

The attack relies on four components:

  • 1) Clients reuse their TLS cert when talking to different relays, so relays can recognize a user by the identity key in her cert.
  • 2) An attacker who knows the client's identity key can probe each guard relay to see if that identity key is connected to that guard relay right now.
  • 3) A variety of active attacks in the literature (starting from "Low-Cost Traffic Analysis of Tor" by Murdoch and Danezis in 2005) allow a malicious website to discover the guard relays that a Tor user visiting the website is using.
  • 4) Clients typically pick three guards at random, so the set of guards for a given user could well be a unique fingerprint for her (see the back-of-the-envelope below).

This release fixes components #1 and #2, which is enough to block the attack; the other two remain as open research problems.
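
To see why component #4 bites, here's the back-of-the-envelope, ignoring bandwidth weighting (which concentrates choices on fast guards and makes real guard sets somewhat less uniform):

    # With a few hundred Guard-flagged relays, the number of possible
    # 3-guard sets dwarfs the user population, so a user's guard set is
    # very likely unique. 800 is a rough, assumed guard count.
    from math import comb

    print(comb(800, 3))   # 85,013,600 possible 3-guard sets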

Special thanks to "frosty_un" for reporting the issue to us! (As far as we know, this has nothing to do with any claimed attack currently getting attention in the media.)

Clients should upgrade so they are no longer recognizable by the TLS certs they present. Relays should upgrade so they no longer allow a remote attacker to probe them to test whether unpatched clients are currently connected to them.

This release also fixes several vulnerabilities that allow an attacker to enumerate bridge relays. Some bridge enumeration attacks still remain; see for example proposal 188.

https://torproject.org/download/download-easy

Changes in version 0.2.2.34 - 2011-10-26

Privacy/anonymity fixes (clients):

  • Clients and bridges no longer send TLS certificate chains on outgoing OR
    connections. Previously, each client or bridge would use the same cert chain
    for all outgoing OR connections until its IP address changes, which allowed any
    relay that the client or bridge contacted to determine which entry guards it is
    using. Fixes CVE-2011-2768. Bugfix on 0.0.9pre5; found by "frosty_un".
  • If a relay receives a CREATE_FAST cell on a TLS connection, it no longer
    considers that connection as suitable for satisfying a circuit EXTEND request.
    Now relays can protect clients from the CVE-2011-2768 issue even if the clients
    haven't upgraded yet.
  • Directory authorities no longer assign the Guard flag to relays that
    haven't upgraded to the above "refuse EXTEND requests to client connections"
    fix. Now directory authorities can protect clients from the CVE-2011-2768 issue
    even if neither the clients nor the relays have upgraded yet. There's a new
    "GiveGuardFlagTo_CVE_2011_2768_VulnerableRelays" config option to let us
    transition smoothly, else tomorrow there would be no guard relays.

Privacy/anonymity fixes (bridge enumeration):

  • Bridge relays now do their directory fetches inside Tor TLS connections,
    like all the other clients do, rather than connecting directly to the DirPort
    like public relays do. Removes another avenue for enumerating bridges. Fixes
    bug 4115; bugfix on 0.2.0.35.
  • Bridge relays now build circuits for themselves in a more similar way to
    how clients build them. Removes another avenue for enumerating bridges. Fixes
    bug 4124; bugfix on 0.2.0.3-alpha, when bridges were introduced.
  • Bridges now refuse CREATE or CREATE_FAST cells on OR connections that they
    initiated. Relays could distinguish incoming bridge connections from client
    connections, creating another avenue for enumerating bridges. Fixes
    CVE-2011-2769. Bugfix on 0.2.0.3-alpha. Found by "frosty_un".

Major bugfixes:

  • Fix a crash bug when changing node restrictions while a DNS lookup is
    in-progress. Fixes bug 4259; bugfix on 0.2.2.25-alpha. Bugfix by "Tey'".
  • Don't launch a useless circuit after failing to use one of a hidden
    service's introduction points. Previously, we would launch a new introduction
    circuit, but not set the hidden service which that circuit was intended to
    connect to, so it would never actually be used. A different piece of code would
    then create a new introduction circuit correctly. Bug reported by katmagic and
    found by Sebastian Hahn. Bugfix on 0.2.1.13-alpha; fixes bug 4212.

Minor bugfixes:

  • Change an integer overflow check in the OpenBSD_Malloc code so that GCC is
    less likely to eliminate it as impossible. Patch from Mansour Moufid. Fixes bug
    4059.
  • When a hidden service turns an extra service-side introduction circuit into
    a general-purpose circuit, free the rend_data and intro_key fields first, so we
    won't leak memory if the circuit is cannibalized for use as another
    service-side introduction circuit. Bugfix on 0.2.1.7-alpha; fixes bug
    4251.
  • Bridges now skip DNS self-tests, to act a little more stealthily. Fixes
    bug 4201; bugfix on 0.2.0.3-alpha, which first introduced bridges. Patch by
    "warms0x".
  • Fix internal bug-checking logic that was supposed to catch failures in
    digest generation so that it will fail more robustly if we ask for a
    nonexistent algorithm. Found by Coverity Scan. Bugfix on 0.2.2.1-alpha; fixes
    Coverity CID 479.
  • Report any failure in init_keys() calls launched because our IP address has
    changed. Spotted by Coverity Scan. Bugfix on 0.1.1.4-alpha; fixes CID 484.

Minor bugfixes (log messages and documentation):

  • Remove a confusing dollar sign from the example fingerprint in the man
    page, and also make the example fingerprint a valid one. Fixes bug 4309; bugfix
    on 0.2.1.3-alpha.
  • The next version of Windows will be called Windows 8, and it has a major
    version of 6, minor version of 2. Correctly identify that version instead of
    calling it "Very recent version". Resolves ticket 4153; reported by
    funkstar.
  • Downgrade log messages about circuit timeout calibration from "notice" to
    "info": they don't require or suggest any human intervention. Patch from Tom
    Lowenthal. Fixes bug 4063; bugfix on 0.2.2.14-alpha.

Minor features:

  • Turn on directory request statistics by default and include them in
    extra-info descriptors. Don't break if we have no GeoIP database. Backported
    from 0.2.3.1-alpha; implements ticket 3951.
  • Update to the October 4 2011 Maxmind GeoLite Country database.

Rumors of Tor's compromise are greatly exaggerated

There are two recent stories claiming the Tor network is compromised. It seems it is easier to get press than to publish research, work with us on the details, and propose solutions. Our comments here are based upon the same stories you are reading. We have no insider information.

The first story has been around 'Freedom Hosting' and their hosting of child abuse materials as exposed by Anonymous Operation Darknet. We're reading the press articles, pastebin urls, and talking to the same people as you. It appears 'Anonymous' cracked the Apache/PHP/MySQL setup at Freedom Hosting and published some, or all, of their user database. These sites happened to be hosted on a Tor hidden service. Further, 'Anonymous' used a somewhat recent RAM-exhaustion denial of service attack on the 'Freedom Hosting' Apache server. It's a simple resource starvation attack that can be conducted over low-bandwidth, low-resource connections to individual hosts. This isn't an attack on Tor, but rather an attack on some software behind a Tor hidden service. This attack was discussed in a thread on the tor-talk mailing list starting October 19th.

The second story is around Eric Filiol's claims of compromising the Tor network leading up to his Hackers to Hackers talk in Brazil in a few days. This claim was initially announced by some French websites; however, it has spread further, such as this Hacker News story.

Again, the tor-talk mailing list had the first discussions of these attacks back on October 13th. To be clear, neither Eric nor his researchers have disclosed anything about this attack to us. They have not talked to us, nor shared any data with us — despite some mail exchanges where we reminded him about the phrase "responsible disclosure".

Here's the attack as we understand it, from reading the various press reports:

They enumerated 6000 IP addresses that they think are Tor relays. There aren't that many Tor relays in the world — 2500 is a more accurate number. We're not sure what caused them to overcount so much. Perhaps they watched the Tor network over a matter of weeks and collected a bunch of addresses that aren't relays anymore? The set of relays is public information, so there's no reason to collect your own list and certainly no reason to end up with a wrong list.

One-third of the machines on those IP addresses are vulnerable to operating system or other system-level attacks, meaning they can break in. That's quite a few! We wonder if that's true with the real Tor network, or just their simulated one? Even ignoring the question of what these 3500 extra IP addresses are, it's important to remember that one-third by number is not at all the same as one-third by capacity: Tor clients load-balance over relays based on the relay capacity, so any useful statement should be about how much of the capacity of the Tor network is vulnerable. It would indeed be shocking if one-third of the Tor network by capacity is vulnerable to external attacks.
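
A toy calculation of the count-versus-capacity gap, with invented numbers:

    # Owning one-third of relays by number can mean owning far less by
    # capacity, since clients pick relays in proportion to bandwidth.
    slow = 1000                 # compromised relays at 100 KB/s each (invented)
    fast = 2000                 # untouched relays at 1000 KB/s each (invented)
    owned = slow * 100
    total = owned + fast * 1000
    print(slow / (slow + fast))   # 0.333... of relays by count
    print(owned / total)          # ~0.048 of the network by capacity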

(There's also an aside about enumerating bridges. They say they found 181 bridges, and then there's a quote saying they "now have a complete picture of the topography of Tor", which is a particularly unfortunate time for that quote since there are currently around 600 bridges running.)

We expect the talk will include discussion about some cool Windows trick that can modify the crypto keys in a running Tor relay that you have local system access to; but it's simpler and smarter just to say that when the attacker has local system access to a Tor relay, the attacker controls the relay.

Once they've broken into some relays, they do congestion attacks like packet spinning to congest the relays they couldn't compromise, to drive users toward the relays they own. It's unclear how many resources are needed to keep the rest of the relays continuously occupied long enough to keep the user from using them. There are probably some better heuristics that clients can use to distinguish between a loaded relay and an unavailable relay; we look forward to learning how well their attack here actually worked.

From there, the attack gets vague. The only hint we have is this nonsense sentence from the article:

The remaining flow can then be decrypted via a fully method of attack called "to clear unknown" based on statistical analysis.

Do they have a new attack on AES, or on OpenSSL's implementation of it, or on our use of OpenSSL? Or are they instead doing some sort of timing attack, where if you own the client's first hop and also the destination you can use statistics to confirm that the two flows are on the same circuit? There's a history of confused researchers proclaiming some sort of novel active attack when passive correlation attacks are much simpler and just as effective.

So the summary of the attack might be "take control of the nodes you can, then congest the other ones so your targets avoid them and use the nodes you control. Then do some unspecified magic crypto attack to defeat the layers of encryption for later hops in the circuit." But really, these are just guesses based on the same news articles you're reading. We look forward to finding out if there's actually an attack we can fix, or if they are just playing all the journalists to get attention.

More generally, there are two broader lessons to remember here. First, research into anonymity-breaking attacks is how the field moves forward, and using Tor for your target is common because a) it's resistant to all the simpler attacks and b) we make it really easy to do your research on. And second, remember that most other anonymity systems out there fall to these attacks so quickly and thoroughly that no researchers even talk about it anymore. For some recent examples, see the single-hop proxy discussions in How Much Anonymity does Network Latency Leak? and Website Fingerprinting in Onion Routing Based Anonymization Networks.

I thank Roger, Nick, and Runa for helping with this post.

Plain Vidalia Bundles to be Discontinued (Don't Panic!)

Over the past few years, Tor has gotten more popular and has had to grow and change to accommodate a highly varied userbase. One aspect of this is getting the software into users' hands and having it immediately do what they want it to, while also not allowing them to inadvertently deanonymize themselves because they missed a configuration step or didn't understand which applications were using Tor and which were not. As a result, we have standardized on the Tor Browser Bundle for all platforms and are currently promoting it as our only fully supported client experience.

Since the Tor Browser Bundle offers the best current protection, we are moving to a client/server model for packages, and consequently the "plain" Vidalia bundles will be discontinued by the end of the year and no longer recommended for client usage. We've started rolling out server Vidalia bundles for Windows, which you can test by going to the download page.

There are currently (and will continue to be) three types of server bundles available:

  • Bridge-by-default Vidalia bundle
  • This configures Tor to act as a bridge by default, so as soon as you install it and run it, you will be helping censored users reach the Tor network. You can read more about bridges here. This bundle still includes Torbutton and Polipo, but those will be removed in the next release (date to be determined).

  • Relay-by-default Vidalia bundle
  • This configures Tor to run as a non-exit relay by default. This means you will serve as either a guard or middle node and help grow the size of the Tor network. You can read more about Tor relay configuration here.

  • Exit-by-default Vidalia bundle
  • This configures Tor to run as an exit relay by default. Exit nodes are special, as they allow traffic to exit from the Tor network to the plain internet, and anyone who has not already looked into the risks associated with running an exit relay should read our tips for running an exit node with minimal harassment.

We've started creating a Tor Browser Bundle FAQ, but we'd like to hear your concerns so we can provide answers where necessary, documentation for alternative setups, and fix the software where answers are insufficient. We have several months before Tor Browser Bundle is the only option, so please help us make it as good as possible! If you have bugs to file, please don't file them in the blog comments -- use our bug tracker for that.

Trip report, Arab Bloggers Meeting, Oct 3-7

Jake, Arturo, and I went to Tunisia Oct 3-7 to teach a bunch of bloggers from Arab countries about Tor and more generally about Internet security and privacy. The previous meetings were in Lebanon; it's amazing to reflect that the world has changed enough that Sami can hold it in his home country now.

The conference was one day of keynotes with lots of press attention, and then three days of unconference-style workshops.

On the keynote day, Jake and Arturo did a talk on mobile privacy, pointing out the wide variety of ways that the telephone network is "the best surveillance tool ever invented". The highlight for the day was when Moez Chakchouk, the head of the Tunisian Internet Agency (ATI), did a talk explicitly stating that Tunisia had been using Smartfilter since 2002, that Smartfilter had been giving Tunisia discounts in exchange for beta-testing their products for other countries in the region like Saudi Arabia, and that it was time for Tunisia to stop wasting money on expensive filters that aren't good for the country anyway.

We did a four-hour Tor training on the first workshop day. We covered what to look for in a circumvention or privacy tool (open source good, open design good, open analysis of security properties good, centralization bad). All the attendees left with a working Tor Browser Bundle install (well, all the attendees except the fellow with the iPad). We got many of them to install Pidgin and OTR as well, but ran into some demo bugs around the Jabber connect server config that derailed some users. I look forward to having the Tor IM Browser Bundle back in action now that we've fixed some Pidgin security bugs.

We did a three-hour general security and privacy Q&A on the second workshop day, covering topics like whether Skype is safe, how else they can do VoIP, how they can trust various software, a demo of what sniffing the network can show, iPhone vs Android vs BlackBerry, etc. It ended with a walk-through of how *we* keep our laptops secure, so people could see how far down the rabbit hole they can go.

Syria and Israel seem to be the scariest adversaries in the area right now, in terms of oppression technology and willingness to use it. Or said another way, if you live in Syria or Palestine, you are especially screwed. We heard some really sad and disturbing stories; but those stories aren't mine to tell here.

We helped to explain the implications of the 54 gigs of Bluecoat logs that got published from inside Syria, detailing URLs and the IP addresses that fetched them. (The IP addresses were scrubbed from the published version of the logs, but the URLs, user agents, timestamps, etc still contain quite sensitive info.)
http://advocacy.globalvoicesonline.org/2011/10/10/bluecoat-us-technology...

Perhaps most interesting in the Bluecoat logs is the evidence of Bluecoat devices phoning home to get updates. So much for Bluecoat's claims that they don't provide support to Syria. If the US government chose to enforce its existing laws against American companies selling surveillance tools to Syria, it would be a great step toward making Tor users safer in Syria right now: no doubt Syria has some smart people who can configure things locally, but it's way worse when Silicon Valley engineers provide new filter rules to detect protocols like Tor for no additional charge.

The pervasiveness of video cameras and journalists at the meeting was surprising. I'm told the previous Arab blogger meeting was just a bunch of geeks sitting around their laptops talking about how to improve their countries and societies. Now that the Twitter Revolution is hot in the press, I guess all the Western media now want a piece of the action.

On the third workshop day we learned that there was a surveillance corporate expo happening in the same hotel as the blogger meeting. We crashed it and collected some brochures. We also found a pair of students from a nearby university who had set up a booth to single-handedly try to offset the evil of the expo. They were part of a security student group at their university that had made a magazine that talked among other things about Tor, Tunisian filtering, etc. We gave them a big pile of Tor stickers.

On our extra day after the workshops, we visited Moez at his Internet Agency and interviewed him for a few hours about the state of filtering in his country. He confirmed that they renewed their Smartfilter license until Sept 2012, and that they still filter "the groups that want it" (government and schools), but for technical reasons they have turned off the global filters (they broke and nobody has fixed them). We pointed out that since an external company operates their filters — including for their military — then that company not only has freedom to censor anything they want, but they also get to see every single request when deciding whether to censor it. Moez used the phrase "national sovereignty" when explaining why it isn't a great idea for Tunisia to outsource their filtering. Great point: it would be foolish to imagine that this external company isn't logging things for their own purposes, whether that's "improving their product" or something more sinister. As we keep seeing, collecting a large data set and then hoping to keep it secret never seems to work out.

One of the points Jake kept hammering on throughout the week was "if *anything* is being filtered, then you have to realize that they're surveilling *everything* in order to make those filtering decisions." The Syrian logs help to drive the point home but it seems like a lot of people haven't really internalized it yet. We still find people thinking of Tor solely as an "anti-filter" tool and not considering the surveillance angle.

After the meeting with Moez, we went to visit one of the universities. We talked to a few dozen students who were really excited to find us there — to the point that they quickly located a video camera and interviewed us on the spot. They brought us to their security class, and informed the professor that we would be speaking for the first half hour of it. We gave an impassioned plea for them to learn more about Tor and teach other people in their country how to be safe online. I think the group of students there could be really valuable for creating local technical Tor resources. As a bonus, the traditional path for a computer science graduate of this university is to go work at Tunisia Telecom, the monopoly telco that hosts the filtering boxes; the more we can influence the incoming generations, the more the change will grow.

New Tor Browser Bundles

The Tor Browser Bundles have been updated to Vidalia 0.2.15. The OS X and Linux bundles have Torbutton 1.4.4; the Windows bundles have Torbutton 1.4.4.1, a hotfix released because 1.4.4 had a minor issue that prevented the Windows bundle from reaching https://check.torproject.org.

https://www.torproject.org/download

Tor Browser Bundle (2.2.33-3)

  • Update Vidalia to 0.2.15
  • Update Torbutton to 1.4.4.1
  • Update NoScript to 2.1.4
  • Remove trailing dash from Windows version number (closes: #4160)
  • Make Tor Browser (Aurora) fail closed when not launched with a TBB profile
    (closes: #4192)