Tor and are out

Changes in version - 2011-06-20
Tor reverts an accidental behavior change for users who
have bridge lines in their torrc but don't want to use them; gets
us closer to having the control socket feature working on Debian;
and fixes a variety of smaller bugs.

Major bugfixes:

  • Revert the UseBridges option to its behavior before
    When we changed the default behavior to "use bridges if any
    are listed in the torrc", we surprised users who had bridges
    in their torrc files but who didn't actually want to use them.
    Partial resolution for bug 3354.

Privacy fixes:

  • Don't attach new streams to old rendezvous circuits after SIGNAL
    NEWNYM. Previously, we would keep using an existing rendezvous
    circuit if it remained open (i.e. if it were kept open by a
    long-lived stream, or if a new stream were attached to it before
    Tor could notice that it was old and no longer in use). Bugfix on; fixes bug 3375.

Minor bugfixes:

  • Fix a bug when using ControlSocketsGroupWritable with User. The
    directory's group would be checked against the current group, not
    the configured group. Patch by Jérémy Bobbio. Fixes bug 3393;
    bugfix on
  • Make connection_printf_to_buf()'s behaviour sane. Its callers
    expect it to emit a CRLF iff the format string ends with CRLF;
    it actually emitted a CRLF iff (a) the format string ended with
    CRLF or (b) the resulting string was over 1023 characters long or
    (c) the format string did not end with CRLF *and* the resulting
    string was 1021 characters long or longer. Bugfix on;
    fixes part of bug 3407.
  • Make send_control_event_impl()'s behaviour sane. Its callers
    expect it to always emit a CRLF at the end of the string; it
    might have emitted extra control characters as well. Bugfix on; fixes another part of bug 3407.
  • Make crypto_rand_int() check the value of its input correctly.
    Previously, it accepted values up to UINT_MAX, but could return a
    negative number if given a value above INT_MAX+1. Found by George
    Kadianakis. Fixes bug 3306; bugfix on 0.2.2pre14.
  • Avoid a segfault when reading a malformed circuit build state
    with more than INT_MAX entries. Found by wanoskarnet. Bugfix on
  • When asked about a DNS record type we don't support via a
    client DNSPort, reply with NOTIMPL rather than an empty
    reply. Patch by intrigeri. Fixes bug 3369; bugfix on
  • Fix a rare memory leak during stats writing. Found by coverity.
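The crypto_rand_int() fix above boils down to rejecting ranges that cannot be represented as a non-negative int. A minimal sketch of the corrected range check (illustrative only, not Tor's actual implementation; rand() stands in for the CSPRNG, and modulo bias is ignored):

```c
#include <assert.h>
#include <limits.h>
#include <stdlib.h>

/* Sketch: return a random int in [0, max).  The bug class fixed here:
 * accepting an unsigned max up to UINT_MAX while returning a signed int,
 * so a max above INT_MAX+1 could yield a negative result after the
 * implicit conversion. */
static int
rand_int_checked(unsigned int max)
{
  assert(max > 0);
  assert(max <= (unsigned int)INT_MAX + 1);  /* reject ranges that overflow int */
  /* rand() is a stand-in for a CSPRNG here. */
  return (int)(rand() % max);
}
```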

Minor features:

  • Update to the June 1 2011 Maxmind GeoLite Country database.

Code simplifications and refactoring:

  • Remove some dead code as indicated by coverity.
  • Remove a few dead assignments during router parsing. Found by
    coverity.
  • Add some forgotten return value checks during unit tests. Found
    by coverity.
  • Don't use 1-bit wide signed bit fields. Found by coverity.

Changes in version - 2011-06-04
Tor makes great progress towards a new stable release: we
fixed a big bug in whether relays stay in the consensus consistently,
we moved closer to handling bridges and hidden services correctly,
and we started the process of better handling the dreaded "my Vidalia
died, and now my Tor demands a password when I try to reconnect to it"
usability issue.

Major bugfixes:

  • Don't decide to make a new descriptor when receiving a HUP signal.
    This bug has caused a lot of 0.2.2.x relays to disappear from the
    consensus periodically. Fixes the most common case of triggering
    bug 1810; bugfix on
  • Actually allow nameservers with IPv6 addresses. Fixes bug 2574.
  • Don't try to build descriptors if "ORPort auto" is set and we
    don't know our actual ORPort yet. Fix for bug 3216; bugfix on
  • Resolve a crash that occurred when setting BridgeRelay to 1 with
    accounting enabled. Fixes bug 3228; bugfix on
  • Apply circuit timeouts to opened hidden-service-related circuits
    based on the correct start time. Previously, we would apply the
    circuit build timeout based on time since the circuit's creation;
    it was supposed to be applied based on time since the circuit
    entered its current state. Bugfix on 0.0.6; fixes part of bug 1297.
  • Use the same circuit timeout for client-side introduction
    circuits as for other four-hop circuits, rather than the timeout
    for single-hop directory-fetch circuits; the shorter timeout may
    have been appropriate with the static circuit build timeout in
    0.2.1.x and earlier, but caused many hidden service access attempts
    to fail with the adaptive CBT. Bugfix on; fixes another part
    of bug 1297.
  • In ticket 2511 we fixed a case where you could use an unconfigured
    bridge if you had configured it as a bridge the last time you ran
    Tor. Now fix another edge case: if you had configured it as a bridge
    but then switched to a different bridge via the controller, you
    would still be willing to use the old one. Bugfix on;
    fixes bug 3321.

Major features:

  • Add an __OwningControllerProcess configuration option and a
    TAKEOWNERSHIP control-port command. Now a Tor controller can ensure
    that when it exits, Tor will shut down. Implements feature 3049.
  • If "UseBridges 1" is set and no bridges are configured, Tor will
    now refuse to build any circuits until some bridges are set.
    If "UseBridges auto" is set, Tor will use bridges if they are
    configured and we are not running as a server, but otherwise will
    make circuits as usual. The new default is "auto". Patch by anonym,
    so the Tails LiveCD can stop automatically revealing you as a Tor
    user on startup.
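The "UseBridges auto" behavior described above can be sketched as a small decision function (the names and return convention here are hypothetical, not Tor's actual code):

```c
#include <assert.h>

typedef enum { USE_BRIDGES_0, USE_BRIDGES_1, USE_BRIDGES_AUTO } use_bridges_t;

/* Returns 1 if bridges should be used, 0 if circuits should be built as
 * usual, and -1 if Tor must refuse to build circuits until bridges are
 * configured. */
static int
decide_bridge_use(use_bridges_t opt, int n_bridges_configured,
                  int running_as_server)
{
  switch (opt) {
    case USE_BRIDGES_1:
      /* "UseBridges 1" with no bridges: refuse to build any circuits. */
      return n_bridges_configured > 0 ? 1 : -1;
    case USE_BRIDGES_AUTO:
      /* The new default: use bridges only if configured and not a server. */
      return (n_bridges_configured > 0 && !running_as_server) ? 1 : 0;
    case USE_BRIDGES_0:
    default:
      return 0;
  }
}
```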

Minor bugfixes:

  • Fix warnings from GCC 4.6's "-Wunused-but-set-variable" option.
  • Remove a trailing asterisk from "exit-policy/default" in the
    output of the control port command "GETINFO info/names". Bugfix
  • Use a wide type to hold sockets when built for 64-bit Windows builds.
    Fixes bug 3270.
  • Warn when the user configures two HiddenServiceDir lines that point
    to the same directory. Bugfix on 0.0.6 (the version introducing
    HiddenServiceDir); fixes bug 3289.
  • Remove dead code from rend_cache_lookup_v2_desc_as_dir. Fixes
    part of bug 2748; bugfix on
  • Log malformed requests for rendezvous descriptors as protocol
    warnings, not warnings. Also, use a more informative log message
    in case someone sees it at log level warning without prior
    info-level messages. Fixes the other part of bug 2748; bugfix
  • Clear the table recording the time of the last request for each
    hidden service descriptor from each HS directory on SIGNAL NEWNYM.
    Previously, we would clear our HS descriptor cache on SIGNAL
    NEWNYM, but if we had previously retrieved a descriptor (or tried
    to) from every directory responsible for it, we would refuse to
    fetch it again for up to 15 minutes. Bugfix on;
    fixes bug 3309.
  • Fix a log message that said "bits" while displaying a value in
    bytes. Found by wanoskarnet. Fixes bug 3318; bugfix on
  • When checking for 1024-bit keys, check for 1024 bits, not 128
    bytes. This allows Tor to correctly discard keys of length 1017
    through 1023. Bugfix on 0.0.9pre5.
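The last bugfix above is a nice illustration of why byte-length checks are too coarse: every key from 1017 to 1024 bits occupies exactly 128 bytes, so a "128 bytes" test accepts all of them. A sketch of the two checks (illustrative only):

```c
#include <assert.h>

/* The correct check: compare bit lengths directly. */
static int
key_is_1024_bits(int key_bits)
{
  return key_bits == 1024;
}

/* The buggy check: a key of ceil(key_bits/8) == 128 bytes also matches
 * for 1017..1023-bit keys, which should have been discarded. */
static int
key_is_128_bytes(int key_bits)
{
  return (key_bits + 7) / 8 == 128;
}
```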

Minor features:

  • Relays now log the reason for publishing a new relay descriptor,
    so we have a better chance of hunting down instances of bug 1810.
    Resolves ticket 3252.
  • Revise most log messages that refer to nodes by nickname to
    instead use the "$key=nickname at address" format. This should be
    more useful, especially since nicknames are less and less likely
    to be unique. Resolves ticket 3045.
  • Log (at info level) when purging pieces of hidden-service-client
    state because of SIGNAL NEWNYM.

Removed options:

  • Remove undocumented option "-F" from tor-resolve: it hasn't done
    anything since

Tails 0.7.2 Released

An update to the fully anonymous operating system, Tails, is now available. Notable user-visible changes in version 0.7.2 include:

* Iceweasel
o Disable Torbutton's external application launch warning: it advises using Tails, and Tails is already running.
o FoxyProxy: install from Debian instead of the older one we previously shipped.

* Software
o Upgrade Linux kernel to Debian's 2.6.32-34squeeze1: fixes tons of bugs and closes a few security holes as well.
o haveged: install an official Debian backport instead of a custom backport.
o unrar: install the version from Debian's non-free repository. Users report unrar-free does not work well enough.

Plus the usual bunch of minor bug fixes and improvements. It can be downloaded from the Tails website or via BitTorrent to save everyone some bandwidth.

The fully detailed changelog can be found in the Tails git repository (debian/changelog).

Improving Private Browsing Modes: "Do-Not-Track" vs Real Privacy by Design

Updated 06/16/2011: Break window.name off into its own linkability issue. While ultimately it should be handled identically to the referer, that was not clear in the original text.
Updated 07/01/2011: Add link to an article reporting that 81% of polled consumers favor some form of Do Not Track.

As I said in my previous post, the Tor Project hopes to work on a set of patches that effectively improves the Private Browsing Mode of Firefox. Long term, we'd love to merge these patches upstream, and/or see them obsoleted by better implementations.

To help keep everyone on the same page with respect to this effort, I've decided to take some time to describe what we envision as our ideal private browsing mode.

Hopefully, such a mode would be useful for more than just Tor users. Indeed, there are many ways to obtain varying levels of IP address privacy once you have solid browser support for privacy by design. The average user is quite capable of going to a cafe and enabling private mode, and this ability can be explained to them in a single sentence by the UI. Arguably they can also obtain low-grade IP privacy simply by tethering to their cell phone, whose IP typically changes regularly. I am told that frequent IP rotation is also the norm for residential connections in Germany and much of the EU to deter services and malware. This is not to mention all of the commercial single-hop VPN and proxy privacy services out there that fail to provide actual browser privacy in their tools.

We believe that the attention surrounding the "Do-Not-Track" header also indicates that network privacy is an important feature. However, we believe that it must be provided by design, as opposed to via a humble request to the adversary that is impossible to audit or enforce, especially outside of the United States. In his presentation at W2SP, Balachander Krishnamurthy compared the "Do-Not-Track" request header to the real-world equivalent of leaving your door unlocked with a posted notice that reads "Do-Not-Rob". While many people actually do post "No Trespassing" and similar signs, no one expects these signs to replace actual security measures.

Unfortunately, right now the only usable and effective web privacy option for the average user is to install an ad-blocker or similar software. Personally, I see the need for an ad-blocker to achieve privacy as a huge failure of the web itself. If web tracking, profiling, and behavioral targeting is so extreme that it cannot be avoided except by blocking all ads, then the prevailing revenue model of the web is unsustainable. We must figure out a way to do non-intrusive, content-relevant advertising while still providing privacy, without relying on regulatory action that is unlikely to be enforceable.

Ok, so enough preaching. What does privacy by design look like? I'm going to describe a list of 7 key properties, some of which the major browser vendors already have or are working towards, but so far are not uniformly deployed in any browser, including even our own Tor Browser.

  1. Make local privacy optional
  2. Avoid Linkability: Minimize privacy options, plugins and addons
  3. Avoid Linkability: Isolate all non-private mode identifiers and state
  4. Avoid Linkability: Isolate state per top-level domain
  5. Avoid Linkability: Reduce fingerprintable attributes
  6. Avoid Linkability: Reduce default referer information
  7. Avoid Linkability: Restrict window.name to referer policy

Make local privacy optional

The browser vendors got it half right the first time around. There are many users who consider local storage privacy to be the primary feature they want from private browsing mode. However, we believe that local privacy is actually an orthogonal feature to network privacy: some users want both, but some only want one or the other.

Therefore, we believe that users should be given the option in private browsing mode to choose whether they want to record browsing history or not. Many users will want to use private mode regularly, and will only be concerned about ad network tracking as opposed to local storage (i.e., similar to the "Do-Not-Track" header's use case). These users will still want history and "awesome bar" functionality to work for them. Almost all users will want to maintain access to their bookmarks and previously stored history from within the mode.

Avoid Linkability: Minimize privacy options, plugins, and addons

Beyond the choice to store history and activity on disk, there should not be numerous global options provided to private browsing modes.

Each option that detectably alters browser behavior can be used as a fingerprinting tool on the part of ad networks. Similarly, all extensions should be disabled in the mode except on an opt-in basis.

Instead of global browser privacy options, privacy decisions should be made per top-level url-bar domain to eliminate the possibility of linkability between domains. For example, when a plugin object (or a JavaScript access of window.plugins) is present in a page, the user should be given the choice of allowing that plugin object for that top-level url-bar domain only. The same goes for exemptions to third party cookie policy, geo-location, and any other privacy permissions.

If the user has indicated they do not care about local history storage, these permissions can be written to disk. Otherwise, they should remain memory-only.

Avoid Linkability: Isolate all non-private identifiers and state

All major browsers already make some effort to isolate explicit identifier state between non-private and private browsing (despite protest that their threat model does not actually require it). Obviously, privacy by design requires that this effort be continued.

The ability to link users between private and non-private browsing modes via explicit identifiers, browser state, or TLS state should be considered a flaw in the mode. After all, the user may have gone to a wifi cafe to obtain IP address privacy, expecting identifier privacy from their browser. It is not fair to the user to abjectly fail to protect them in this case.

Avoid Linkability: Isolate state per top-level domain

Users who want continuous "Do-Not-Track"-style privacy will likely use the mode regularly, possibly even exclusively, to avoid behavioral advertising and associated tracking. These users will also want to reduce the linkability between arbitrary sites they visit.

This is a particular concern for Tor as many activists use web-based email, social networking sites, and other web services for organizing. Their activity in Tor Browser on one site should not trivially de-anonymize their activity on another site to ad networks and exits.

To provide this property, all identifiers and state must be isolated to the top-level url bar domain, starting with cookies, but extending to the cache, DOM Storage, client certificates, and HTTP auth.

The benefit of this approach comes not only in the form of reduced linkability, but also in terms of simplified privacy UI. If all stored browser state and permissions become associated with the top-level url-bar domain, the six or seven different pieces of privacy UI governing these identifiers and permissions can become just one piece of UI, possibly with a context-menu option to drill down into specific types of state.

We also believe that such an identifier model makes privacy relationships much more clear to the average user. Instead of having various disjoint relationships with and permissions for hundreds of omnipresent third-party domains, users will have one relationship with each of the top-level url-bar domains that they choose to interact with and authenticate to.

Obviously, the downside of this enhanced protection against identifier linkability is that third party services that rely on third party cookie transmission may be impeded by this model. Long term, the hope is for standardized, in-browser support for services like federated login and "Like" buttons. Google Chrome has actually implemented a feature called Web-Send that provides this functionality in a privacy-preserving way. They have even written a legacy HTML5 version that provides the same privacy properties, save for the need to trust a third-party site for DOM Storage.

We are also trying to introduce the notion of "protected cookies" in the alpha Tor Browser series, to allow users to specify they want to maintain a relationship with certain sites but not with others. To simplify this experience, we've currently entirely disabled Third Party Cookies in Tor Browser, but we believe that this may end up breaking mashup and federated login sites that might still be able to function under the more lenient double-keyed cookie model.

Several other interim steps are possible in the meantime. One could imagine iframe attributes that cause the browser chrome to request that a site be granted permission to set top-level cookies, or even a fully automated client-side mechanism that performs this promotion automatically for selected sites on mouse-click (such a mechanism is actually being prototyped by researchers right now).

Avoid Linkability: Reduce fingerprintable attributes

Once the linkability via explicit identifiers is eliminated, it becomes important to address the linkability that is possible through browser fingerprinting.

Fingerprinting is a difficult issue to address, but that difficulty does not preclude a best-effort from being made at eliminating or mitigating the major culprits.

Luckily, the major culprit is plugin-provided information. Once plugins are restricted to only permitted top-level domains, fingerprinting linkability gets effectively reduced to the information available via CSS, JavaScript, and HTTP headers.

The largest culprits in CSS and JavaScript are resolution and media information (especially those properties that also provide information about the device and display as opposed to limiting information to the properties of the rendering window itself), the number of fonts that can be loaded per origin, time-based fingerprints, and WebGL device information.

It is likely that we need another Panopticlick-style study that focuses exclusively on CSS and JavaScript to determine the relative importance of these components, but most of them can be addressed without serious breakage of functionality.

Avoid Linkability: Reduce default referer information

So far, the Tor Project has refrained from restricting the referer, primarily because we believe restriction becomes less necessary once identifiers are isolated and linkability has been reduced.

However, non-Tor users do have one important element of linkability: IP address. Even those who have an alternate Internet connection may still be bound to a single alternate IP. These users probably benefit in a real way from a restricted referer. It turns out a lot of information is already smuggled or leaked via referer and URL parameters to third party sites, either deliberately or accidentally.

Referer restriction could take multiple forms, but we believe more site flexibility is key. Sites actually have no way to restrict referer for most element types currently, and conversely, sites will always be able to subvert referer restrictions by smuggling the same data in POST or URL parameters.

Therefore, we believe that referers should be restricted by default in private browsing mode using a same-origin policy where sites from different origins get either no referer, or a referer that is truncated to the top-level domain. However, sites should also be allowed to request an exemption to this rule on a per-site basis using an html attribute, which could trigger a chrome permissions request, or simply be granted automatically (on the assumption that they could just URL smuggle the data).

While this may not seem like much of a protection, at least it allows us to differentiate negligence from deliberate information sharing, and to restrict information leakage in the default scenario. Again, because this data can always be transmitted between elements either directly or via a back-channel, it is better it be visible and apparent than covert.
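The default policy proposed above (full referer for same-origin requests, truncation to the top-level domain otherwise) can be sketched as follows. This is a naive illustration that treats the last two dot-separated labels as the "top-level domain"; a real implementation would consult a public-suffix list:

```c
#include <string.h>

/* Hypothetical helper: decide what referer host to send from source_host
 * to dest_host under a same-origin truncation policy. */
static const char *
effective_referer(const char *source_host, const char *dest_host)
{
  if (strcmp(source_host, dest_host) == 0)
    return source_host;                  /* same origin: send full referer */
  /* Cross-origin: truncate to the last two labels (naive eTLD guess). */
  const char *last_dot = strrchr(source_host, '.');
  if (!last_dot)
    return source_host;                  /* bare hostname: nothing to strip */
  const char *tld = source_host;
  for (const char *p = last_dot; p > source_host; --p) {
    if (p[-1] == '.') { tld = p; break; }
  }
  return tld;
}
```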

Avoid Linkability: Restrict window.name to referer policy

window.name poses many of the same conflicts as referer information. It gives sites a way to pass data between pages in the navigation lifespan of a tab. Sites can use window.name to store data, but are given no way to clear it easily. Hence it becomes very hard to differentiate deliberate data exchange from accidental leakage.

Just like referer, it is obvious that window.name should be empty whenever the URL bar is rewritten by the user. There should be no legitimate, functional need for data exchange between two arbitrary user-typed URL bar domains in any situation. window.name on user-entered URLs should be cleared regardless of any changes to existing referer policy.

Similarly, sites could be given the option to allow transmission of window.name to third party iframe elements, but the default should be to isolate window.name to the same origin policy. We do not believe this second step is required for Tor usage, but it may be helpful to non-Tor users, for similar reasons as the referer leakage.

As such, our current plan is to bind window.name's lifespan to the Referer header contents in our addon implementations.


We believe that privacy can be a differentiating feature for browsers. Even early studies revealed that many users immediately began using private browsing modes regularly, either by mistake or deliberately.

We believe that many of these users deliberately use private browsing in cafes and on other alternate Internet connections assuming that they are being protected from ad tracking and behavioral analysis. We welcome user studies to determine what users actually expect and want from private browsing modes for the definitive answer, but obviously we're pretty convinced what the outcome will be.

Privacy by design represents the technical realization of "Do-Not-Track": the ability to actually opt out of (that is, prevent) a complete behavioral profile being built to record and model your specific web browsing habits.

In order for a private browsing mode to succeed in actually providing privacy by design, it must reduce activity linkability in all forms. Six out of the seven items mentioned above are really linkability issues at their core. Reducing the ability of the adversary to link private activity to non-private activity and also to other private activity is what privacy by design is all about. This reduction in linkability is what prevents a behavioral profile from being constructed.

The Tor Project looks forward to a day where privacy by design becomes a key feature of major browsers. We would love to be able to ship a vastly simplified browser extension that contains only a compiled Tor binary and some minimal addon code that simply "upgrades" the user's private browsing mode into a fully functional anonymous mode. The ability to do this would vastly simplify our package offerings, and make it significantly easier to get our software into censored and oppressed regions.

However, until then, we must do our best to attempt to provide software that we believe will provide the privacy and security that users have come to expect from us. For now, this means shipping our own browser.

May 2011 Progress Report

The May 2011 progress report is available at the bottom of this post and at

Highlights include an experimental Tor release, an experimental Vidalia release, some timing attack resistance work, some ECC work, updates on obfsproxy, and a datagram protocol comparison.

IPv6 and Tor Project Websites

In February 2011, we enabled IPv6 access to the majority of our websites. We've seen a growing amount of IPv6 traffic going to www, gitweb, and archive since then.

However, this isn't what people think about, nor want to hear about, when thinking of Tor and IPv6. Nick posted about our efforts on this front in late April. There is an opportunity for someone to make a big impact by helping us enable IPv6 in the Tor codebase. We're keeping our thoughts in a draft proposal. We also wrote down more thoughts about supporting IPv6 exits back in 2007.

If IPv6 is the future, protecting your privacy online should be part of that future. We're looking for help to make it happen.

New Tor Browser Bundles

All of the alpha Tor Browser Bundles have been updated to the latest Tor

Firefox 3.6 Tor Browser Bundles

Linux bundles
1.1.9: Released 2011-05-19

  • Update Tor to
  • Update NoScript to
  • Update BetterPrivacy to 1.50
  • Update HTTPS Everywhere to 0.9.9.development.5

OS X bundle
1.0.17: Released 2011-05-19

  • Update Tor to
  • Update NoScript to
  • Update HTTPS-Everywhere to 0.9.9.development.5
  • Update BetterPrivacy to 1.50

Firefox 4 Tor Browser Bundles

Tor Browser Bundle (2.2.27-1)

  • Update Tor to
  • Update HTTPS Everywhere to 0.9.9.development.5
  • Update NoScript to

Temporary direct download links for Firefox 4 bundles:

Tor and are out

Changes in version - 2011-05-18
Tor fixes a bridge-related stability bug in the previous
release, and also adds a few more general bugfixes.

Major bugfixes:

  • Fix a crash bug when changing bridges in a running Tor process.
    Fixes bug 3213; bugfix on

  • When the controller configures a new bridge, don't wait 10 to 60
    seconds before trying to fetch its descriptor. Bugfix on; fixes bug 3198 (suggested by 2355).

Minor bugfixes:

  • Require that onion keys have exponent 65537 in microdescriptors too.
    Fixes more of bug 3207; bugfix on

  • Tor used to limit HttpProxyAuthenticator values to 48 characters.
    Changed the limit to 512 characters by removing base64 newlines.
    Fixes bug 2752. Fix by Michael Yakubovich.

  • When a client starts or stops using bridges, never use a circuit
    that was built before the configuration change. This behavior could
    put at risk a user who uses bridges to ensure that her traffic
    only goes to the chosen addresses. Bugfix on; fixes
    bug 3200.

Changes in version - 2011-05-17
Tor fixes a variety of potential privacy problems. It
also introduces a new "SocksPort auto" approach that should make it
easier to run multiple Tors on the same system, and does a lot of
cleanup to get us closer to a release candidate.

Security/privacy fixes:

  • Replace all potentially sensitive memory comparison operations
    with versions whose runtime does not depend on the data being
    compared. This will help resist a class of attacks where an
    adversary can use variations in timing information to learn
    sensitive data. Fix for one case of bug 3122. (Safe memcmp
    implementation by Robert Ransom based partially on code by DJB.)
  • When receiving a hidden service descriptor, check that it is for
    the hidden service we wanted. Previously, Tor would store any
    hidden service descriptors that a directory gave it, whether it
    wanted them or not. This wouldn't have let an attacker impersonate
    a hidden service, but it did let directories pre-seed a client
    with descriptors that it didn't want. Bugfix on 0.0.6.
  • On SIGHUP, do not clear out all TrackHostExits mappings, client
    DNS cache entries, and virtual address mappings: that's what
    NEWNYM is for. Fixes bug 1345; bugfix on
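The constant-time comparison described in the first fix above is typically implemented by accumulating the XOR of every byte pair, so the loop's running time never depends on where the buffers first differ. A sketch in that spirit (illustrative, not Tor's actual tor_memeq):

```c
#include <stddef.h>

/* Data-independent equality check: always touches every byte and never
 * branches early, so timing reveals nothing about the first mismatch. */
static int
mem_eq_consttime(const void *a, const void *b, size_t n)
{
  const unsigned char *pa = a, *pb = b;
  unsigned char diff = 0;
  for (size_t i = 0; i < n; ++i)
    diff |= pa[i] ^ pb[i];              /* accumulate differences */
  return diff == 0;
}
```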

Major features:

  • The options SocksPort, ControlPort, and so on now all accept a
    value "auto" that opens a socket on an OS-selected port. A
    new ControlPortWriteToFile option tells Tor to write its
    actual control port or ports to a chosen file. If the option
    ControlPortFileGroupReadable is set, the file is created as
    group-readable. Now users can run two Tor clients on the same
    system without needing to manually mess with parameters. Resolves
    part of ticket 3076.
  • Set SO_REUSEADDR on all sockets, not just listeners. This should
    help busy exit nodes avoid running out of usable ports just
    because all the ports have been used in the near past. Resolves
    issue 2850.
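The "auto" port feature above follows the standard socket idiom: bind to port 0, let the OS pick a port, then recover the chosen port with getsockname() so it can be written out (as ControlPortWriteToFile does). A POSIX-only sketch (a generic illustration, not Tor's code) that also sets SO_REUSEADDR as described in the second item:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Open a loopback listener on an OS-selected port; return the fd and
 * store the chosen port in *port_out.  Returns -1 on failure. */
static int
open_auto_port(int *port_out)
{
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  if (fd < 0) return -1;
  int one = 1;
  setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
  struct sockaddr_in sin;
  memset(&sin, 0, sizeof(sin));
  sin.sin_family = AF_INET;
  sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
  sin.sin_port = 0;                       /* 0 means "auto": OS chooses */
  if (bind(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0 ||
      listen(fd, 16) < 0) {
    close(fd);
    return -1;
  }
  socklen_t len = sizeof(sin);
  if (getsockname(fd, (struct sockaddr *)&sin, &len) < 0) {
    close(fd);
    return -1;
  }
  *port_out = ntohs(sin.sin_port);        /* the port to report to controllers */
  return fd;
}
```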

Minor features:

  • New "GETINFO net/listeners/(type)" controller command to return
    a list of addresses and ports that are bound for listeners for a
    given connection type. This is useful when the user has configured
    "SocksPort auto" and the controller needs to know which port got
    chosen. Resolves another part of ticket 3076.
  • Add a new ControlSocketsGroupWritable configuration option: when
    it is turned on, ControlSockets are group-writeable by the default
    group of the current user. Patch by Jérémy Bobbio; implements
    ticket 2972.
  • Tor now refuses to create a ControlSocket in a directory that is
    world-readable (or group-readable if ControlSocketsGroupWritable
    is 0). This is necessary because some operating systems do not
    enforce permissions on AF_UNIX sockets. Permissions on the
    directory holding the socket, however, seem to work everywhere.
  • Rate-limit a warning about failures to download v2 networkstatus
    documents. Resolves part of bug 1352.
  • Backport code from 0.2.3.x that allows directory authorities to
    clean their microdescriptor caches. Needed to resolve bug 2230.
  • When an HTTPS proxy reports "403 Forbidden", we now explain
    what it means rather than calling it an unexpected status code.
    Closes bug 2503. Patch from Michael Yakubovich.
  • Update to the May 1 2011 Maxmind GeoLite Country database.

Minor bugfixes:

  • Authorities now clean their microdesc cache periodically and when
    reading from disk initially, not only when adding new descriptors.
    This prevents a bug where we could lose microdescriptors. Bugfix
    on; fixes part of bug 2230.
  • Do not crash when our configuration file becomes unreadable, for
    example due to a permissions change, between when we start up
    and when a controller calls SAVECONF. Fixes bug 3135; bugfix
    on 0.0.9pre6.
  • Avoid a bug that would keep us from replacing a microdescriptor
    cache on Windows. (We would try to replace the file while still
    holding it open. That's fine on Unix, but Windows doesn't let us
    do that.) Bugfix on; bug found by wanoskarnet.
  • Add missing explanations for the authority-related torrc options
    RephistTrackTime, BridgePassword, and V3AuthUseLegacyKey in the
    man page. Resolves issue 2379.
  • As an authority, do not upload our own vote or signature set to
    ourselves. It would tell us nothing new, and as of,
    it would get flagged as a duplicate. Resolves bug 3026.
  • Accept hidden service descriptors if we think we might be a hidden
    service directory, regardless of what our consensus says. This
    helps robustness, since clients and hidden services can sometimes
    have a more up-to-date view of the network consensus than we do,
    and if they think that the directory authorities list us as an
    HSDir, we might actually be one. Related to bug 2732; bugfix on
  • When a controller changes TrackHostExits, remove mappings for
    hosts that should no longer have their exits tracked. Bugfix on
  • When a controller changes VirtualAddrNetwork, remove any mappings
    for hosts that were automapped to the old network. Bugfix on
  • When a controller changes one of the AutomapHosts* options, remove
    any mappings for hosts that should no longer be automapped. Bugfix
    on
  • Do not reset the bridge descriptor download status every time we
    re-parse our configuration or get a configuration change. Fixes
    bug 3019; bugfix on

    Minor bugfixes (code cleanup):

    • When loading the microdesc journal, remember its current size.
      In 0.2.2, this helps prevent the microdesc journal from growing
      without limit on authorities (who are the only ones to use it in
      0.2.2). Fixes a part of bug 2230; bugfix on.
      Fix posted by "cypherpunks."
    • The microdesc journal is supposed to get rebuilt only if it is
      at least _half_ the length of the store, not _twice_ the length
      of the store. Bugfix on; fixes part of bug 2230.
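The inverted-threshold bug above is easy to see side by side. A hypothetical Python sketch of the two checks, assuming lengths in bytes (the real logic lives in Tor's C microdescriptor code):

```python
def should_rebuild_fixed(journal_len, store_len):
    # Intended behavior: fold the journal back into the store once
    # the journal reaches half the store's length.
    return journal_len >= store_len // 2

def should_rebuild_buggy(journal_len, store_len):
    # The buggy check fired only at twice the store's length, letting
    # the journal grow four times larger than intended before a rebuild.
    return journal_len >= store_len * 2
```

With a 100-byte store, the fixed check triggers a rebuild at a 50-byte journal, while the buggy one waits until 200 bytes.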
    • Fix a potential null-pointer dereference while computing a
      consensus. Bugfix on tor-, found with the help of
      clang's analyzer.
    • Avoid a possible null-pointer dereference when rebuilding the mdesc
      cache without actually having any descriptors to cache. Bugfix
      on; issue discovered using clang's static analyzer.
      cache without actually having any descriptors to cache. Bugfix
      on; issue discovered using clang's static analyzer.
    • If we fail to compute the identity digest of a v3 legacy keypair,
      warn, and don't use a buffer-full of junk instead. Bugfix on; fixes bug 3106.
    • Resolve an untriggerable issue in smartlist_string_num_isin(),
      where if the function had ever in the future been used to check
      for the presence of a too-large number, it would have given an
      incorrect result. (Fortunately, we only used it for 16-bit
      values.) Fixes bug 3175; bugfix on
    • Require that introduction point keys and onion handshake keys
      have a public exponent of 65537. Starts to fix bug 3207; bugfix

    Removed features:

    • Caches no longer download and serve v2 networkstatus documents
      unless the FetchV2Networkstatus flag is set: these documents
      haven't been used by clients or relays since 0.2.0.x. Resolves
      bug 3022.
  • A visit to Iceland

    I spent two days in Iceland discussing Tor and freedom of information with various people. I talked to a few people, including a member of the Icelandic Parliament, about the International Modern Media Institute. The goals of IMMI are to secure free speech and to define new operating principles for the global media. They are starting with Iceland and moving on to the world. They have already had much success in Iceland, but are running into issues of scale and funding. They could use some help.

    The second day I talked to the computer forensics team from the National Police of Iceland about Tor. We discussed all things Tor and their experiences with it. Apparently there are 'computer specialists' traveling Europe and talking to law enforcement (for great profit), disparaging any technology that provides security and privacy to citizens as being 'for child abuse and organized crime'. These people neglect to mention that all technologies are dual use, and that the human behind a technology determines whether it is put to good or bad ends. One of the officers mentioned that no one talks about crowbar crime, but everyone talks about computer crime as if humans weren't involved. Overall, it was a great discussion lasting a few hours.

    I then head over to work with the people from 1984. They are one of the largest hosting providers in Iceland, and thanks to them we now have hosting in the country. I learned more about the physical infrastructure of the Internet in Iceland. We discussed ways to increase competition now that the Icelandic government has bailed out the company that owns nearly all of the fiber in the country. Imagine a country with fiber everywhere (already true in Iceland), treated like the road infrastructure, with any provider getting access to it. Now mix in successful freedom of expression laws from IMMI.

    That night I talked about Tor at the only hackerspace in Iceland, Hakkavélin, during their beer and crypto night. Someone showed up and recorded my entire talk until their battery ran out; kapteinnkrokur posted the video. I covered Tor topics, life under surveillance, and some more advanced topics relating to bridges, SSL filtering, and attempted DHT directory info over Tor. Afterwards, many of us went out to a bar to talk more until 2 AM. I had a great chat with Bjarni and Ewelina from PageKite about Tor marketing, supporting privacy-enhancing technology, and peer-to-peer collaboration for all.

    Iceland is a fantastic country and the people are great. I hope to spend more time there, as soon as the volcanoes stop disrupting flights.

    Thank you to Björgvin, Birgitta, Berglind, and Mörður for arranging meetings and hosting me for the two days.
