Archive - May 2011


New Tor Browser Bundles

All of the alpha Tor Browser Bundles have been updated to the latest Tor release.

Firefox 3.6 Tor Browser Bundles

Linux bundles
1.1.9: Released 2011-05-19

  • Update Tor to
  • Update NoScript to
  • Update BetterPrivacy to 1.50
  • Update HTTPS Everywhere to 0.9.9.development.5

OS X bundle
1.0.17: Released 2011-05-19

  • Update Tor to
  • Update NoScript to
  • Update HTTPS Everywhere to 0.9.9.development.5
  • Update BetterPrivacy to 1.50

Firefox 4 Tor Browser Bundles

Tor Browser Bundle (2.2.27-1)

  • Update Tor to
  • Update HTTPS Everywhere to 0.9.9.development.5
  • Update NoScript to

Temporary direct download links for Firefox 4 bundles:

Two new Tor releases are out

Changes in version - 2011-05-18
Tor fixes a bridge-related stability bug in the previous
release, and also adds a few more general bugfixes.

Major bugfixes:

  • Fix a crash bug when changing bridges in a running Tor process.
    Fixes bug 3213; bugfix on

  • When the controller configures a new bridge, don't wait 10 to 60
    seconds before trying to fetch its descriptor. Bugfix on; fixes bug 3198 (suggested by 2355).

Minor bugfixes:

  • Require that onion keys have exponent 65537 in microdescriptors too.
    Fixes more of bug 3207; bugfix on

  • Tor used to limit HttpProxyAuthenticator values to 48 characters.
    Changed the limit to 512 characters by removing base64 newlines.
    Fixes bug 2752. Fix by Michael Yakubovich.

  • When a client starts or stops using bridges, never use a circuit
    that was built before the configuration change. This behavior could
    put at risk a user who uses bridges to ensure that her traffic
    only goes to the chosen addresses. Bugfix on; fixes
    bug 3200.

    Changes in version - 2011-05-17
    Tor fixes a variety of potential privacy problems. It
    also introduces a new "SocksPort auto" approach that should make it
    easier to run multiple Tors on the same system, and does a lot of
    cleanup to get us closer to a release candidate.

    Security/privacy fixes:

    • Replace all potentially sensitive memory comparison operations
      with versions whose runtime does not depend on the data being
      compared. This will help resist a class of attacks where an
      adversary can use variations in timing information to learn
      sensitive data. Fix for one case of bug 3122. (Safe memcmp
      implementation by Robert Ransom based partially on code by DJB.)
    • When receiving a hidden service descriptor, check that it is for
      the hidden service we wanted. Previously, Tor would store any
      hidden service descriptors that a directory gave it, whether it
      wanted them or not. This wouldn't have let an attacker impersonate
      a hidden service, but it did let directories pre-seed a client
      with descriptors that it didn't want. Bugfix on 0.0.6.
    • On SIGHUP, do not clear out all TrackHostExits mappings, client
      DNS cache entries, and virtual address mappings: that's what
      NEWNYM is for. Fixes bug 1345; bugfix on
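
    The constant-time comparison idea behind the first fix can be sketched in C as follows (mem_neq is an illustrative name, not Tor's actual function, which is more general):

```c
#include <stddef.h>

/* A data-independent comparison: always examine every byte, so the
 * running time doesn't reveal where two buffers first differ.
 * Illustrative sketch only, not Tor's real implementation. */
static int mem_neq(const void *a, const void *b, size_t n)
{
  const unsigned char *pa = a, *pb = b;
  unsigned char diff = 0;
  size_t i;
  for (i = 0; i < n; i++)
    diff |= pa[i] ^ pb[i];
  return diff != 0; /* 0 if equal, 1 if different */
}
```

    A naive memcmp() returns as soon as it finds a differing byte, so an attacker who can measure response times learns how long a matching prefix was; accumulating XOR differences over the whole buffer removes that signal.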

    Major features:

    • The options SocksPort, ControlPort, and so on now all accept a
      value "auto" that opens a socket on an OS-selected port. A
      new ControlPortWriteToFile option tells Tor to write its
      actual control port or ports to a chosen file. If the option
      ControlPortFileGroupReadable is set, the file is created as
      group-readable. Now users can run two Tor clients on the same
      system without needing to manually mess with parameters. Resolves
      part of ticket 3076.
    • Set SO_REUSEADDR on all sockets, not just listeners. This should
      help busy exit nodes avoid running out of usable ports just
      because all the ports have been used in the near past. Resolves
      issue 2850.
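
    A minimal torrc sketch of the "auto" ports feature described above (the file path is illustrative):

```
# Let the OS pick free ports, so two Tor clients can coexist
SocksPort auto
ControlPort auto
# Tor writes the control port it actually chose to this file
ControlPortWriteToFile /var/run/tor/control-port
ControlPortFileGroupReadable 1
```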

    Minor features:

    • New "GETINFO net/listeners/(type)" controller command to return
      a list of addresses and ports that are bound for listeners for a
      given connection type. This is useful when the user has configured
      "SocksPort auto" and the controller needs to know which port got
      chosen. Resolves another part of ticket 3076.
    • Add a new ControlSocketsGroupWritable configuration option: when
      it is turned on, ControlSockets are group-writeable by the default
      group of the current user. Patch by Jérémy Bobbio; implements
      ticket 2972.
    • Tor now refuses to create a ControlSocket in a directory that is
      world-readable (or group-readable if ControlSocketsGroupWritable
      is 0). This is necessary because some operating systems do not
      enforce permissions on AF_UNIX sockets. Permissions on the
      directory holding the socket, however, seem to work everywhere.
    • Rate-limit a warning about failures to download v2 networkstatus
      documents. Resolves part of bug 1352.
    • Backport code from 0.2.3.x that allows directory authorities to
      clean their microdescriptor caches. Needed to resolve bug 2230.
    • When an HTTPS proxy reports "403 Forbidden", we now explain
      what it means rather than calling it an unexpected status code.
      Closes bug 2503. Patch from Michael Yakubovich.
    • Update to the May 1 2011 Maxmind GeoLite Country database.
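
    For example, once "SocksPort auto" is in use, a controller can ask which port was chosen over the control connection; the address and port in the reply below are illustrative:

```
GETINFO net/listeners/socks
250-net/listeners/socks="127.0.0.1:52381"
250 OK
```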

    Minor bugfixes:

    • Authorities now clean their microdesc cache periodically and when
      reading from disk initially, not only when adding new descriptors.
      This prevents a bug where we could lose microdescriptors. Fixes
      part of bug 2230.
    • Do not crash when our configuration file becomes unreadable, for
      example due to a permissions change, between when we start up
      and when a controller calls SAVECONF. Fixes bug 3135; bugfix
      on 0.0.9pre6.
    • Avoid a bug that would keep us from replacing a microdescriptor
      cache on Windows. (We would try to replace the file while still
      holding it open. That's fine on Unix, but Windows doesn't let us
      do that.) Bugfix on; bug found by wanoskarnet.
    • Add missing explanations for the authority-related torrc options
      RephistTrackTime, BridgePassword, and V3AuthUseLegacyKey in the
      man page. Resolves issue 2379.
    • As an authority, do not upload our own vote or signature set to
      ourself. It would tell us nothing new, and as of,
      it would get flagged as a duplicate. Resolves bug 3026.
    • Accept hidden service descriptors if we think we might be a hidden
      service directory, regardless of what our consensus says. This
      helps robustness, since clients and hidden services can sometimes
      have a more up-to-date view of the network consensus than we do,
      and if they think that the directory authorities list us as an
      HSDir, we might actually be one. Related to bug 2732; bugfix on
    • When a controller changes TrackHostExits, remove mappings for
      hosts that should no longer have their exits tracked. Bugfix on
    • When a controller changes VirtualAddrNetwork, remove any mappings
      for hosts that were automapped to the old network. Bugfix on
    • When a controller changes one of the AutomapHosts* options, remove
      any mappings for hosts that should no longer be automapped. Bugfix
    • Do not reset the bridge descriptor download status every time we
      re-parse our configuration or get a configuration change. Fixes
      bug 3019; bugfix on
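
    The mapping-related options mentioned above look like this in a torrc (values are illustrative):

```
# Remember which exit was used for these hosts, for 30 minutes
TrackHostExits .example.com
TrackHostExitsExpire 1800
# Hand out virtual addresses from this range for automapped hosts
VirtualAddrNetwork 127.192.0.0/10
AutomapHostsOnResolve 1
AutomapHostsSuffixes .onion,.exit
```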

    Minor bugfixes (code cleanup):

    • When loading the microdesc journal, remember its current size.
      In 0.2.2, this helps prevent the microdesc journal from growing
      without limit on authorities (who are the only ones to use it in
      0.2.2). Fixes a part of bug 2230; bugfix on
      Fix posted by "cypherpunks."
    • The microdesc journal is supposed to get rebuilt only if it is
      at least _half_ the length of the store, not _twice_ the length
      of the store. Bugfix on; fixes part of bug 2230.
    • Fix a potential null-pointer dereference while computing a
      consensus. Bugfix on tor-, found with the help of
      clang's analyzer.
    • Avoid a possible null-pointer dereference when rebuilding the
      microdesc cache without actually having any descriptors to cache.
      Bugfix on; issue discovered using clang's static analyzer.
    • If we fail to compute the identity digest of a v3 legacy keypair,
      warn, and don't use a buffer-full of junk instead. Bugfix on; fixes bug 3106.
    • Resolve an untriggerable issue in smartlist_string_num_isin(),
      where if the function had ever in the future been used to check
      for the presence of a too-large number, it would have given an
      incorrect result. (Fortunately, we only used it for 16-bit
      values.) Fixes bug 3175; bugfix on
    • Require that introduction point keys and onion handshake keys
      have a public exponent of 65537. Starts to fix bug 3207; bugfix

    Removed features:

    • Caches no longer download and serve v2 networkstatus documents
      unless the FetchV2Networkstatus flag is set: these documents
      haven't been used by clients or relays since 0.2.0.x. Resolves
      bug 3022.
    A visit to Iceland

    I spent two days in Iceland discussing Tor and freedom of information with various people. I talked to a few people, including a member of the Icelandic Parliament, about the International Modern Media Institute (IMMI). The goals of IMMI are to secure free speech and to define new operating principles for the global media. They are starting with Iceland and moving on to the world. They have already had much success in Iceland, but are running into issues of scale and funding. They could use some help.

    The second day I talked to the computer forensics team from the National Police of Iceland about Tor. We discussed all things Tor and their experiences with it. Apparently there are 'computer specialists' traveling Europe talking to law enforcement (for great profit) and disparaging any technology that provides security and privacy to citizens as being 'for child abuse and organized crime'. These people neglect to mention that all technologies are dual use, and that the human behind the technology determines whether it is used for good or bad. One of the officers mentioned that no one talks about crowbar crime, but everyone talks about computer crime as if humans aren't involved. Overall, it was a great discussion lasting a few hours.

    I then headed over to work with the people from 1984. They are one of the largest hosting providers in Iceland, and thanks to them, we now have services hosted in the country. I learned more about the physical infrastructure of the Internet in Iceland. We discussed ways to increase competition now that the Icelandic government has bailed out the company that owns nearly all of the fiber in the country. Imagine a country with fiber everywhere (already true in Iceland) that treats it like the road infrastructure, with any provider getting access to it. Now mix in successful freedom of expression laws from IMMI.

    That night I talked about Tor at the beer and crypto night at Hakkavélin, the only hackerspace in Iceland. Someone showed up and recorded my entire talk until their battery ran out; kapteinnkrokur posted the video. I covered Tor topics, life under surveillance, and some more advanced topics relating to bridges, SSL filtering, and attempted DHT directory info over Tor. Afterwards, many of us went out to a bar to talk more until 2 AM. I had a great chat with Bjarni and Ewelina from PageKite about Tor marketing, supporting privacy enhancing technology, and peer to peer collaboration for all.

    Iceland is a fantastic country and the people are great. I hope to spend more time there, as soon as the volcanoes stop disrupting flights.

    Thank you to Björgvin, Birgitta, Berglind, and Mörður for arranging meetings and hosting me for the two days.

    2011 Stockholm Hackfest Thanks

    Thanks to everyone for attending today. We had some great discussions about The Haven Project, an economic association for Tor relay operators, Telecomix, pluggable transports, TAILS, IPv6, sandboxing Flash, and of course, Tor itself.

    And a great deal of gratitude to those who promoted and hosted the hackfest, and provided food.


    Strategies for getting more bridge addresses

    We need more bridges. When I first envisioned the bridge address arms race, I said "We need to get lots of bridges, so the adversary can't learn them all." That was the wrong statement to make. In retrospect, the correct statement was: "We need the rate of new bridge addresses to exceed the rate that the adversary can block them."

    For background, bridge relays (aka bridges) are Tor relays that aren't listed in the main Tor directory. So even if an attacker blocks all the public relays, they still need to block all these "private" or "dark" relays too. We deployed them several years ago in anticipation of the upcoming arms race, and they worked great in their first showing in 2009. But since then, China has learned and blocked most of the bridges we give out through public (https and gmail) distribution channels.

    One piece of the puzzle is smarter bridge distribution mechanisms (plus see this post for more thoughts) — right now we're getting 8000 mails a day from gmail asking for bridges from a pool of less than a thousand. The distribution strategy that works best right now is ad hoc distribution via social connections. But even if we come up with brilliant new distribution mechanisms, we simply need more addresses to work with. How can we get them? Here are five strategies.

    Approach one: Make it easy to become a bridge using the Vidalia interface. This approach was our first try at getting more bridges: click on "Sharing", then "Help censored users reach the Tor network". Easy to do, and lots of people have done it. But lots here is thousands, not hundreds of thousands. Nobody knows that they should click it or why.
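
    For reference, the Vidalia checkbox described above amounts to a few lines of torrc (the port choice here is illustrative):

```
# Act as a bridge: relay traffic, but stay out of the public directory
ORPort 443
BridgeRelay 1
PublishServerDescriptor bridge
ExitPolicy reject *:*
```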

    Approach two: Bridge-by-default bundles. People who want to help out can now simply download and run our bridge-by-default bundle, and poof they're a bridge. There's a lot of flexibility here. For example, we could provide a customized bridge-by-default bundle for a Chinese human rights NGO that publishes your bridge address directly to them; then they give out the bridge addresses from their volunteers through their own social network. I think this strategy will be most effective when combined with targeted advocacy, that is, after a given community is convinced that they want to help and want to know how they can best help.

    Approach three: Fast, stable, reachable Tor clients auto-promote themselves. Tor clients can monitor their own stability, performance, and reachability, and the best clients can opt to become bridges automatically. We can tune the thresholds ("how fast, how stable") in the directory consensus, to tailor how many clients promote themselves in response to demand. Read the proposal for more details. In theory this approach could provide us with many tens of thousands of bridges in a wide array of locations — and we're drawing from a pool of people who already have other reasons to download Tor. Downsides include a) there's quite a bit of coding work remaining before we can launch it, b) there are certainly situations where we shouldn't turn a Tor user into a bridge, which means we need to sort out some smart way to interact with the user and get permission, and c) these users don't actually change addresses as often as we might want, so we're still in the "get lots of bridges" mindset rather than the "increase the rate of new bridge addresses" mindset.

    Approach four: Redirect entire address blocks into the Tor network. There's no reason the bridge and its address need to run in the same location, and it's really addresses that are the critical resource here. If we get our friends at various ISPs to redirect some spare /16 networks our way, we'd have a lot more addresses to play with, and more importantly, we can control the churn of these addresses. Past experience with some censors shows that they work hard to unblock addresses that are no longer acting as proxies. If we light up only a tiny fraction of the IP space at a time, how long until they block all of it? How much does the presence of other services on the address block make them hesitate? I want to find out. The end game here is for Comcast to give us a few random IP addresses from each of their /24 networks. All the code on the Tor bridge side already works here, so the next steps are a) figure out how to teach an ISP's router to redirect some of its address space to us, and then b) sign up some people who have a good social network of users who need bridges, and get them to play that arms race more actively.

    Approach five: More generic proxies. Originally I had thought that the extra layer of encryption and authentication from a bridge was a crucial piece of keeping the user (call her Alice) safe. But I'm increasingly thinking that the security properties she gets from a Tor circuit (anonymity/privacy) can be separated from the security properties she gets from the bridge (reachability, and optionally obfuscation). That is, as long as Alice can fetch the signed Tor network consensus and verify the keys of each of the public relays in her path, it doesn't matter so much that the bridge gets to see her traffic. Attacks by the bridge are no more effective than attacks by a local network adversary, which by design is not much. Now, this conclusion is premature — adding a bridge into the picture means there's a new observation point in addition to the local network adversary, and observation points are exactly what the attacker needs to correlate traffic flows and break Tor's anonymity. But on the flip side, right now bridges already get to act as these observation points, and the extra layer of encryption they add doesn't seem to help Alice any. So it's too early to say that a socks or https proxy is just as safe as a bridge (assuming you use a full Tor circuit in either case), but I'm optimistic that these more generic proxies have a role to play.

    If we go this route, then rather than needing volunteers to run a whole Tor (which is especially cumbersome because it needs libraries like OpenSSL), people could run socks proxies on a much broader set of platforms. For example, they should be easy to add into Orbot (our Tor package for Android) or into Seattle (an overlay network by UW researchers that restricts applications to a safe subset of Python). We could even imagine setting up a website where volunteers visit a given page and it runs a Flash or Java applet socks proxy, lending their address to the bridge pool while their browser is open. There are some gotchas to work through, such as a) needing to sign the applets so they have the required network permissions, b) figuring out how to get around the fact that it seems hard to allow connections from the Internet to a flash plugin, and c) needing to program the socks proxy with a Tor bridge or relay address so the user doesn't have to ask for it (after all, socks handshakes are unencrypted and it wouldn't do to let the adversary watch Alice ask for an IP address that's known to be associated with Tor). This 'flash proxy' idea was developed in collaboration with Dan Boneh at Stanford, and they are currently designing and building it — stay tuned.

    Code Commit Movie of our Website

    A visual history of our website, as seen through svn code commits, from the beginning.
    The full movie is available after the jump.

    April 2011 Progress Report

    The April 2011 Progress Report is attached to this post.

    Highlights include releases for tor, vidalia, arm, and libevent, plus updates to pluggable transports, obfsproxy, torbutton, and many translation and core architecture improvements.

    Reading links, 7 May edition

    Just some quick links to what has interested us over the past week.
