Mission Improbable: Hardening Android for Security And Privacy

Updates: See the Changes section for a list of changes since initial posting.

After a long wait, the Tor project is happy to announce a refresh of our Tor-enabled Android phone prototype.

This prototype is meant to show a possible direction for Tor on mobile. While I use it myself for my personal communications, it has some rough edges, and installation and update will require familiarity with Linux.

The prototype is also meant to show that it is still possible to replace and modify your mobile phone's operating system while retaining verified boot security - though only just barely. The Android ecosystem is moving very fast, and in this rapid development, we are concerned that the freedom of users to use, study, share, and improve the operating system software on their phones is being threatened. If we lose these freedoms on mobile, we may never get them back. This is especially troubling as mobile access to the Internet becomes the primary form of Internet usage worldwide.

Quick Recap

We are trying to demonstrate that it is possible to build a phone that respects user choice and freedom, vastly reduces vulnerability surface, and sets a direction for the ecosystem with respect to how to meet the needs of high-security users. Obviously this is a large task. Just as with our earlier prototype, we are relying on suggestions and support from the wider community.

Help from the Community

When we released our first prototype, the Android community exceeded our wildest expectations with respect to their excitement and contributions. The comments on our initial blog post were filled with helpful suggestions.

Soon after that post went up, Cédric Jeanneret took my Droidwall scripts and adapted them into the very nice OrWall, which is exactly how we think a Tor-enabled phone should work in general. Users should have full control over what information applications can access on their phones, including Internet access, and have control over how that Internet access happens. OrWall provides the networking component of this access control. It allows the user to choose which apps route through Tor, which route through non-Tor, and which can't access the Internet at all. It also has an option to let a specific Voice over IP app, like Signal, bypass Tor for the UDP voice data channel, while still sending call setup information over Tor.

At around the time that our blog post went up, the Copperhead project began producing hardened builds of Android. These hardening features make Android vulnerabilities more difficult to exploit; Copperhead also provides WiFi MAC address randomization, so that devices can no longer be trivially tracked using this information.

Copperhead is also the only Android ROM that supports verified boot, which prevents exploits from modifying the boot, system, recovery, and vendor device partitions. Copperhead has also extended this protection by preventing system applications from being overridden by Google Play Store apps, or from writing bytecode to writable partitions (where it could be modified and infected). This makes Copperhead an excellent choice for our base system.

The Copperhead Tor Phone Prototype

Upon the foundation of Copperhead, Orbot, Orwall, F-Droid, and other community contributions, we have built an installation process that installs a new Copperhead phone with Orbot, OrWall, SuperUser, Google Play, and MyAppList with a list of recommended apps from F-Droid.

We require SuperUser and OrWall instead of using the VPN APIs because the Android VPN APIs are still not as reliable as a firewall in terms of preventing leaks. Without a firewall-based solution, the VPN can leak at boot, or if Orbot is killed or crashes. Additionally, DNS leaks outside of Tor still occur with the VPN APIs on some systems.
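To illustrate the firewall-based approach, here is a rough sketch of the kind of per-app iptables rules OrWall manages. Android assigns each app its own UID, so owner matching can route a single app through Tor. This is only an illustration, not OrWall's actual code: the UID is made up, the ports are Orbot's usual transparent proxy defaults, and the rules are printed rather than applied (applying them requires root on the device).

```shell
# Print (not apply) OrWall-style rules for one app. Android gives every app
# a distinct UID, so iptables owner matching can steer that app's traffic.
# Ports 9040/5400 are Orbot's usual TransPort/DNSPort; the UID is made up.
emit_tor_rules() {
  local uid="$1"
  # Redirect the app's outbound TCP connections into Orbot's transparent proxy
  echo "iptables -t nat -A OUTPUT -m owner --uid-owner $uid -p tcp --syn -j REDIRECT --to-ports 9040"
  # Send the app's DNS queries to Orbot's DNSPort
  echo "iptables -t nat -A OUTPUT -m owner --uid-owner $uid -p udp --dport 53 -j REDIRECT --to-ports 5400"
  # Drop anything else from this app rather than letting it leak around Tor
  echo "iptables -A OUTPUT -m owner --uid-owner $uid -j DROP"
}

emit_tor_rules 10123
```

Because rules like these live in the kernel, they hold even if Orbot crashes: the app's traffic is dropped instead of leaking, which is exactly the property the VPN APIs lack.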

We provide Google Play primarily because Signal still requires it, but also because some users will want apps from the Play Store. You do not need a Google account to use Signal, but without one you must download the Signal Android package and sideload it manually (via adb install).

The need to install these components to the system partition means that we must re-sign the Copperhead image and updates if we want to retain the system integrity guarantees of Verified Boot.

Thankfully, the Nexus devices supported by Copperhead allow the use of user-generated keys. The installation process simply takes a Copperhead image, installs our additional apps, and signs it with the new keys.

Systemic Threats to Software Freedom

Unfortunately, not only is Copperhead the only Android rebuild that supports Verified Boot, but Google's Nexus/Pixel line is the only Android hardware that allows users to install their own keys, retaining both the ability to modify the device and the filesystem security provided by Verified Boot.

This, combined with Google's increasing hostility towards Android as a fully Open Source platform, as well as the difficulty for external entities to keep up with Android's surprise releases and opaque development process, means that the freedoms of end users to use, study, share, and improve the Android system are all in great jeopardy.

This all means that the Android platform is effectively moving to the same "look but don't touch" Shared Source model that Microsoft tried in the early 2000s. However, instead of being explicit about this, Google appears to be doing it surreptitiously. It is a deeply disturbing trend.

It is unfortunate that Google seems to see locking down Android as the only solution to the fragmentation and resulting insecurity of the Android platform. We believe that more transparent development and release processes, along with deals for longer device firmware support from SoC vendors, would go a long way to ensuring that it is easier for good OEM players to stay up to date. Simply moving more components to Google Play, even though it will keep those components up to date, does not solve the systemic problem that there are still no OEM incentives to update the base system. Users of old AOSP base systems will always be vulnerable to library, daemon, and operating system issues. Simply giving them slightly more up to date apps is a bandaid that both reduces freedom and does not solve the root security problems. Moreover, as more components and apps are moved to closed source versions, Google is reducing its ability to resist the demand that backdoors be introduced. It is much harder to backdoor an open source component (especially with reproducible builds and binary transparency) than a closed source one.

If Google Play is to be used as a source of leverage to solve this problem, a far better approach would be to use it as a pressure point to mandate that OEMs keep their base system updated. If they fail to do so, their users will begin to lose Google Play functionality, with proper warning that notifies them that their vendor is not honoring their support agreement. In a more extreme version, the Android SDK itself could have compiled code that degrades app functionality or disables apps entirely when the base system becomes outdated.

Another option would be to change the license of AOSP itself to require that any parties that distribute binaries of the base system must provide updates to all devices for some minimum period of time. That would create a legal avenue for class-action lawsuits or other legal action against OEMs that make "fire and forget" devices that leave their users vulnerable, and endanger the Internet itself.

While extreme, both of these options would be preferable to completely giving up on free and open computing for the future of the Internet. Google should be competing on overall Google account integration experience, security, app selection, and media store features. They should use their competitive position to encourage/enforce good OEM behavior, not to create barriers and bandaids that end up enabling yet more fragmentation due to out of date (and insecure) devices.

It is for this reason that we believe that projects like Copperhead are incredibly important to support. Once we lose these freedoms on mobile, we may never get them back. It is especially troubling to imagine a future where mobile access to the Internet is the primary form of Internet usage, and for that usage, all users are forced to choose between having either security or freedom.

Hardware Choice

The hardware for this prototype is the Google Nexus 6P. While we would prefer to support lower end models for low income demographics, only the Nexus and Pixel lines support Verified Boot with user-controlled keys. We are not aware of any other models that allow this, but we would love to hear if there are any that do.

In theory, installation should work for any of the devices supported by Copperhead, but updating the device will require the addition of an updater-script and an adaptation of the update script for that device, to convert the radio and bootloader images to the OTA update format.

If you are not allergic to buying hardware online, we highly recommend ordering the device from the Copperhead store. The devices are shipped with tamper-evident security tape, for what it's worth. Otherwise, if you're lucky, you might still be able to find a 6P at your local electronics retail store. Please consider donating to Copperhead anyway. The project is doing everything right, and could use your support.

Hopefully, we can add support for the newer Pixel devices as soon as AOSP (and Copperhead) support them.


Installation

Before you dive in, remember that this is a prototype, and you will need to be familiar with Linux.

With the proper prerequisites, installation should be as simple as checking out the Mission Improbable git repository, and downloading a Copperhead factory image for your device.

The script should walk you through a series of steps, printing out instructions for unlocking the phone and flashing the system. Please read the instructions in the repository for full installation details.

The very first device boot after installation will take a while, so be patient. During this boot, you should note the fingerprint of your key on the yellow boot splash screen. That fingerprint authenticates your key, which in turn authenticates the rest of the boot process.

Once the system is booted, after you have given Google Play Services the Location and Storage permissions (as per the instructions printed by the script), make sure you set the Date and Time accurately, or Orbot will not be able to connect to the Tor Network.

Then, you can start Orbot, and allow F-Droid, Download Manager, the Copperhead updater, Google Play Services (if you want to use Signal), and any other apps you want to access the network.

NOTE: To keep Orbot up to date, you will have to go into F-Droid's Repositories option and enable Guardian Project Official Releases.

Installation: F-Droid apps

Once you have networking and F-Droid working, you can use MyAppList to install apps from F-Droid. Our installation provides a list of useful apps for MyAppList. The MyAppList app will allow you to select the subset you want, and install those apps in succession by invoking F-Droid. Start this process by clicking on the upward arrow at the bottom right of the screen:

Alternatively, you can add links to additional F-Droid packages in the apk url list prior to running the installation, and they will be downloaded and installed during installation.

NOTE: Do not update OrWall past 1.1.0 via F-Droid until issue 121 is fixed, or networking will break.

Installation: Signal

Signal is one of the most useful communications applications to have on your phone. Unfortunately, despite being open source itself, Signal is not included in F-Droid, for historical reasons. As near as we can tell, most of the issues behind that argument have since been resolved. Now that Signal is reproducible, we see no reason why it can't be included in some F-Droid repo, if not the F-Droid repo, so long as it is the same Signal with the same key. It is unfortunate to see so much disagreement over this point, though. Even if Signal won't meet the criteria for the official F-Droid repo (or wherever that tirefire of a flamewar is at right now), we wish that at the very least it could meet the criteria for an alternate "Non-Free" repo, much like the Debian project provides. Nothing is preventing the redistribution of the official Signal apk.

For now, if you do not wish to use a Google account with Google Play, it is possible to download the Signal apk from one of the apk mirror sites (such as APK4fun). To ensure that you have the official Signal apk, perform the following:

  1. Download the apk.
  2. Unzip the apk with unzip org.thoughtcrime.securesms.apk
  3. Verify that the signing key is the official key with keytool -printcert -file META-INF/CERT.RSA
  4. You should see a line with SHA256: 29:F3:4E:5F:27:F2:11:B4:24:BC:5B:F9:D6:71:62:C0 EA:FB:A2:DA:35:AF:35:C1:64:16:FC:44:62:76:BA:26
  5. Make sure that fingerprint matches (the space was added for formatting).
  6. Verify that the contents of that APK are properly signed by that cert with: jarsigner -verify org.thoughtcrime.securesms.apk. You should see jar verified printed out.
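The fingerprint comparison in step 5 is easy to get wrong by eye. Below is a small helper (our own, not part of the post's tooling) that strips colons and spaces from keytool's SHA256 line before comparing, so the formatting space shown above does not cause a false mismatch:

```shell
# Compare keytool's "SHA256: ..." line to the expected Signal signing cert.
# Colons and spaces are removed from both sides before comparing, so the
# space added for formatting in the fingerprint above is harmless.
EXPECTED="29F34E5F27F211B424BC5BF9D67162C0EAFBA2DA35AF35C16416FC446276BA26"

check_fingerprint() {
  local seen
  seen=$(printf '%s' "$1" | sed 's/^.*SHA256://' | tr -d ' :')
  if [ "$seen" = "$EXPECTED" ]; then
    echo "MATCH"
  else
    echo "MISMATCH"
  fi
}

check_fingerprint "SHA256: 29:F3:4E:5F:27:F2:11:B4:24:BC:5B:F9:D6:71:62:C0 EA:FB:A2:DA:35:AF:35:C1:64:16:FC:44:62:76:BA:26"
# prints MATCH
```

A MISMATCH means the apk was not signed by the expected key and should not be installed.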

Then, you can install the Signal APK via adb with adb install org.thoughtcrime.securesms.apk. You can verify you're up to date with the version in the app store with ApkTrack.

For voice calls to work, select Signal as the SIP application in OrWall, and allow SIP access.


Updates

Because Verified Boot ensures filesystem integrity at the device block level, and because we modify the root and system filesystems, normal over the air updates will not work. The fact that we use different device keys will prevent the official updates from installing at all, but even if they did, they would remove the installation of Google Play, SuperUser, and the OrWall initial firewall script.

When the phone notifies you of an update, you should instead download the latest Copperhead factory image to the mission-improbable working directory, and use the update script to convert it into a signed update zip that will get sideloaded and installed by the recovery. You need to have the same keys from the installation in the keys subdirectory.

The script should walk you through this process.
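The overall flow can be sketched as a prerequisite check followed by a recovery sideload. The function below is our own illustration, not the repository's script: the file names are assumptions, and the adb commands are printed rather than run so the sketch works without a device attached.

```shell
# Illustrative update flow: check that the signing keys and factory image
# are in place, then show the recovery sideload steps. File names here are
# assumptions; adb commands are echoed so this runs without a device.
prepare_update() {
  local image="$1"
  [ -d keys ] || { echo "error: keys/ from the original install is required" >&2; return 1; }
  [ -f "$image" ] || { echo "error: factory image $image not found" >&2; return 1; }
  echo "adb reboot recovery              # then choose 'Apply update from ADB'"
  echo "adb sideload signed-update.zip   # the zip generated from $image"
}

# Demo call; fails politely unless keys/ and the image are actually present.
prepare_update copperhead-factory.tgz || true
```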

Updates may also reset the system clock, which must be accurate for Orbot to connect to the Tor network. If this happens, you may need to reset the clock manually under Date and Time Settings.


Usage

I use this prototype for all of my personal communications - Email, Signal, XMPP+OTR, Mumble, offline maps and directions in OSMAnd, taking pictures, and reading news and books. I use Intent Intercept to avoid accidentally clicking on links, and to avoid surprising cross-app launching behavior.

For Internet access, I personally use a secondary phone that acts as a router for this phone while it is in airplane mode. That phone has an app store and I use it for less trusted, non-private applications, and for emergency situations should a bug with the device prevent it from functioning properly. However, it is also possible to use a cheap wifi cell router, or simply use the actual cell capabilities on the phone itself. In that case, you may want to look into CSipSimple, and a VoIP provider, but see the Future Work section about potential snags with using SIP and Signal at the same time.

I also often use Google Voice or SIP numbers instead of the number of my actual phone's SIM card, as a general protection measure. Giving people this number instead of my actual cell number prevents remote baseband exploits and other location tracking attacks from being trivial to pull off from a distance. This is a trade-off, though, as you are trusting the VoIP provider with your voice data, and on top of this, many providers do not support encryption for call signaling or voice data, and fewer still support SMS.

For situations where using the cell network at all is either undesirable or impossible (perhaps because it is disabled due to civil unrest), the mesh network messaging app Rumble shows a lot of promise. It supports both public and encrypted groups in a Twitter-like interface run over either a wifi or bluetooth ad-hoc mesh network. It could use some attention.

Future Work

As with the last post on this topic, this prototype obviously has a lot of unfinished pieces and unpolished corners. We've made a lot of progress as a community on many of the future work items from that post, but many still remain.

Future work: More Device Support

As mentioned above, installation should work on all devices that Copperhead supports out of the box. However, updates require the addition of an updater-script and an adaptation of the update script for that device, to convert the radio and bootloader images to the OTA update format.

Future Work: MicroG support

Instead of Google Play Services, it might be nice to provide the Open Source MicroG replacements. This requires some hackery to spoof the Google Play Services Signature field, though. Unfortunately, this method creates a permission that any app can request to spoof signatures for any service. We'd be much happier about this if we could find a way for MicroG to be the only app able to spoof signatures, and only for the Google services it replaces. This may be as simple as hardcoding those app ids in an updated version of one of these patches.

Future Work: Netfilter API (or better VPN APIs)

Back in the WhisperCore days, Moxie wrote a Netfilter module using libiptc that enabled apps to edit iptables rules if they had permissions for it. This would eliminate the need for iptables shell callouts in OrWall, would be more stable and less leaky than the current VPN APIs, and would eliminate the need to have root access on the device (which is additional vulnerability surface). That API needs to be dusted off and updated for Copperhead compatibility, and then OrWall would need to be updated to use it, if present.

Alternatively, the VPN API could be used, if there were ways to prevent leaks at boot, DNS leaks, and leaks if the app is killed or crashes. We'd also want the ability to control specific app network access, and allow bypass of UDP for VoIP apps.

Future Work: Fewer Binary Blobs

There are unfortunately quite a few binary blobs extracted from the Copperhead build tree in the repository. They are enumerated in the README. This was done for expedience: building some of those components outside of the Android build tree is fairly difficult. We would happily accept patches for this, or for replacement tools.

Future Work: F-Droid auto-updates, crash reporting, and install count analytics

These requests come from Moxie. Having these would make him much happier about F-Droid Signal installs.

It turns out that F-Droid supports full auto-updates with the Privileged Extension, which Copperhead is working on including.

Future Work: Build Reproducibility

Copperhead itself is not yet built reproducibly. It's our opinion that this is AOSP's responsibility, though. If it's not the core team at Google, they should at least fund Copperhead or some other entity to work on it for them. Reproducible builds should be an organizational priority for all software companies. Moreover, in combination with free software, they are an excellent deterrent against backdoors.

In this brave new world, even if we can trust that the NSA won't be ordered to attack American companies to insert backdoors, deteriorating relationships with China and other state actors may mean that their incentives to hold back on such attacks will be greatly reduced. Closed source components can also benefit from reproducible builds, since compromising multiple build systems/build teams is inherently harder than compromising just one.

Future Work: Orbot Stability

Unfortunately, the stability of Orbot itself still leaves a lot to be desired. It is fairly fragile to network disconnects, and often becomes stuck in states that require you to go into the Android Settings for Apps and Force Stop Orbot before it can reconnect properly. The startup UI is also fragile in the face of flaky network connectivity.

Worse: if you tap the start button either too hard or multiple times while the network is disconnected or while the phone's clock is out of sync, Orbot can become confused and say that it is connected when it is not. Luckily, because Tor network access is enforced by OrWall (and the Android kernel), instabilities in Orbot do not risk Tor leaks.

Future Work: Backups and Remote Wipe

Unfortunately, backups are an unsolved problem. In theory, adb backup -all should work, but even the latest adb version from the official Android SDK appears to only back up and restore partial data. Apparently this is because adb obeys manifest restrictions on apps that request not to be backed up. For the purposes of full device backup, it would be nice to have an adb version that really backed up everything.

Instead, I use the export feature of K-9 Mail, Contacts, and the Calendar Import-Export app to export that data to /sdcard, and then adb pull /sdcard. It would be nice to have an end-to-end encrypted remote backup app, though. Flock had promise, but was unfortunately discontinued.
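The manual routine described above lends itself to dated snapshots on the Linux side. This is only a sketch of our own: the directory layout is an assumption, and the adb pull command is printed rather than executed so it can run without a device attached.

```shell
# Keep dated snapshots of the exports the apps wrote to /sdcard.
# Layout is an assumption; the adb command is echoed, not executed.
snapshot_dir="backups/$(date +%Y-%m-%d)"
mkdir -p "$snapshot_dir"
echo "adb pull /sdcard/ $snapshot_dir/"
```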

Similarly, if a phone is lost, it would be nice to have a cryptographically secure remote wipe feature.

Future Work: Baseband Analysis (and Isolation)

Until phones with auditable baseband isolation are available (the Neo900 looks like a promising candidate), the baseband remains a problem on all of these phones. It is unknown if vulnerabilities or backdoors in the baseband can turn on the mic, make silent calls, or access device memory. Using a portable hotspot or secondary insecure phone is one option for now, but it is still unknown if the baseband is fully disabled in airplane mode. In the previous post, commenters recommended wiping the baseband, but on most phones, this seems to also disable GPS.

It would be useful to audit whether airplane mode fully disables the baseband using either OpenBTS, OsmocomBB, or a custom hardware monitoring device.

Future Work: Wifi AP Scanning Prevention

Copperhead may randomize the MAC address, but it is quite likely that it still tries to connect to configured APs, even if they are not there (see these two XDA threads). This can reveal information about your home and work networks, and any other networks you have configured.

There is a Wifi Privacy Police app in F-Droid, and Smarter WiFi may be another option, but we have not yet had time to audit/test either. Any reports would be useful here.

Future Work: Port Tor Browser to Android

The Guardian Project is undertaking a port of Tor Browser to Android as part of their OrFox project. This port is still incomplete, however. The Tor Project is working on obtaining funding to bring it on par with the desktop Tor Browser.

Future Work: Better SIP Support

Right now, it is difficult to use two or more SIP clients in OrWall. You basically have to switch between them in the settings, which is also fragile and error prone. It would be ideal if OrWall allowed multiple SIP apps to be selected.

Additionally, SIP providers and SIP clients have very poor support for TLS and SRTP encryption for call setup and voice data. I could find only two such providers that advertised this support, but I was unable to actually get TLS and SRTP working with CSipSimple or LinPhone for either of them.

Future Work: Installation and full OTA updates without Linux

In order for this to become a real end-user phone, we need to remove the requirement to use Linux in order to install and update it. Unfortunately, this is tricky. Technically, Google Play can't be distributed in a full Android firmware, so we'd have to get special approval for that. Alternatively, we could make the default install use MicroG, as above. In either case, it should just be a matter of taking the official Copperhead builds, modifying them, changing the update URL, and shipping those devices with Google Play/MicroG and the new OTA location. Copperhead or Tor could easily support multiple device install configurations this way without needing to rebuild everything for each one. So legal issues aside, users could easily have their choice of MicroG, Google Play, or neither.

Personally, I think the demand is higher for some level of Google account integration functionality than what MicroG provides, so it would be nice to find some way to make that work. But there are solid reasons for avoiding the use of a Google account (such as Google's mistreatment of Tor users, the unavailability of Google in certain areas of the world due to censorship of Google, and the technical capability of Google Play to send targeted backdoored versions of apps to specific accounts).

Future Work: Better Boot Key Representation/Authentication

The truncated fingerprint is not the best way to present a key to the user. It is both too short for security, and too hard to read. It would be better to use something like the SSH Randomart representation, or some other visual representation that encodes a cryptographically strong version of the key fingerprint, and asks the user to click through it to boot. Though obviously, if this boot process can also be modified, this may be insufficient.

Future Work: Faster GPS Lock

The GPS on these devices is device-only by default, which can mean it is very slow. It would be useful to find out if µg UnifiedNlp can help, and which of its backends are privacy preserving enough to recommend/enable by default.

Future Work: Sensor Management/Removal

As pointed out in great detail in one of the comments below, these devices have a large number of sensors that can be used to create side channels, gather information about the environment, and send it back. The original Mission Impossible post went into quite a bit of detail about how to remove the microphone from the device. This time around, I focused on software security, but as the commenter suggested, you can still go down the hardware modding rabbit hole if you like. Just search YouTube for "teardown Nexus 6P" or similar.

Changes Since Initial Posting

Like the last post, this post will likely be updated for a while based on community feedback. Here is the list of those changes so far.

  1. Added information about secondary SIP/VoIP usage in the Usage section and the Future Work sections.
  2. Added a warning not to upgrade OrWall until Issue 121 is fixed.
  3. Describe how we could remove the Linux requirement and have OTA updates, as a Future Work item.
  4. Remind users to check their key fingerprint at installation and boot, and point out in the Future Work section that this UI could be better.
  5. Mention the Neo900 in the Future Work: Baseband Isolation section
  6. Wow, the Signal vs F-Droid issue is a stupid hot mess. Can't we all just get along and share the software? Don't make me sing the RMS song, people... I'll do it...
  7. Added a note that you need the Guardian Project F-Droid repo to update Orbot.
  8. Add a thought to the Systemic Threats to Software Freedom section about using licensing to enforce the update requirement in order to use the AOSP.
  9. Mention ApkTrack for monitoring for Signal updates, and Intent Intercept for avoiding risky clicks.
  10. Mention alternate location providers as Future Work, and that we need to pick a decent backend.
  11. Link to Conversations and some other apps in the usage section. Also add some other links here and there.
  12. Mention that Date and Time must be set correctly for Orbot to connect to the network.
  13. Added a link to Moxie's netfilter code to the Future Work section, should anyone want to try to dust it off and get it working with Orwall.
  14. Use keytool instead of sha256sum to verify the Signal key's fingerprint. The CERT.RSA file is not stable across versions.
  15. The latest Orbot 15.2.0-rc8 still has issues claiming that it is connected when it is not. This is easiest to observe if the system clock is wrong, but it can also happen on network disconnects.
  16. Add a Future Work section for sensor management/removal

Mission Impossible: Hardening Android for Security and Privacy

Updates: See the Changes section for a list of changes since initial posting.

This post has been updated further by the November 2016 Refresh of the same idea.

Executive Summary

The future is here, and ahead of schedule. Come join us, the weather's nice.

This blog post describes the installation and configuration of a prototype of a secure, full-featured Android telecommunications device with full Tor support, individual application firewalling, true cell network baseband isolation, and optional ZRTP encrypted voice and video support. ZRTP does run over UDP, which is not yet possible to send over Tor, but we are able to send SIP account login and call setup over Tor independently.

The SIP client we recommend also supports dialing normal telephone numbers if you have a SIP gateway that provides trunking service.

Aside from a handful of binary blobs to manage the device firmware and graphics acceleration, the entire system can be assembled (and recompiled) using only FOSS components. However, as an added bonus, we will describe how to handle the Google Play store as well, to mitigate the two infamous Google Play Backdoors.


Introduction

Android is the most popular mobile platform in the world, with a wide variety of applications, including many applications that aid in communications security, censorship circumvention, and activist organization. Moreover, the core of the Android platform is Open Source, auditable, and modifiable by anyone.

Unfortunately though, mobile devices in general and Android devices in particular have not been designed with privacy in mind. In fact, they've seemingly been designed with nearly the opposite goal: to make it easy for third parties, telecommunications companies, sophisticated state-sized adversaries, and even random hackers to extract all manner of personal information from the user. This includes the full content of personal communications with business partners and loved ones. Worse still, by default, the user is given very little in the way of control or even informed consent about what information is being collected and how.

This post aims to address this, but we must first admit we stand on the shoulders of giants. Organizations like Cyanogen, F-Droid, the Guardian Project, and many others have done a great deal of work to try to improve this situation by restoring control of Android devices to the user, and to ensure the integrity of our personal communications. However, all of these projects have shortcomings and often leave gaps in what they provide and protect. Even in cases where proper security and privacy features exist, they typically require extensive configuration to use safely, securely, and correctly.

This blog post enumerates and documents these gaps, describes workarounds for serious shortcomings, and provides suggestions for future work.

It is also meant to serve as a HOWTO to walk interested, technically capable people through the end-to-end installation and configuration of a prototype of a secure and private Android device, where access to the network is restricted to an approved list of applications, and all traffic is routed through the Tor network.

It is our hope that this work can be replicated and eventually fully automated, given a good UI, and rolled into a single ROM or ROM addon package for ease of use. Ultimately, there is no reason why this system could not become a full-fledged off-the-shelf product, given proper hardware support and a good UI for the more technical bits.

The remainder of this document is divided into the following sections:

  1. Hardware Selection
  2. Installation and Setup
  3. Google Apps Setup
  4. Recommended Software
  5. Device Backup Procedure
  6. Removing the Built-in Microphone
  7. Removing Baseband Remnants
  8. Future Work
  9. Changes Since Initial Posting

Hardware Selection

If you truly wish to secure your mobile device from remote compromise, it is necessary to carefully select your hardware. First and foremost, it is absolutely essential that the carrier's baseband firmware is completely isolated from the rest of the platform. Because your cell phone baseband does not authenticate the network (in part to allow roaming), any random hacker with their own cell network can exploit baseband vulnerabilities and use them to install malware on your device.

While there are projects underway to determine which handsets actually provide true hardware baseband isolation, at the time of this writing there is very little public information available on this topic. Hence, the only safe option remains a device with no cell network support at all (though cell network connectivity can still be provided by a separate device). For the purposes of this post, the reference device is the WiFi-only version of the 2013 Google Nexus 7 tablet.

For users who wish to retain full mobile access, we recommend obtaining a cell modem device that provides a WiFi access point for data services only. These devices do not have microphones and in some cases do not even have fine-grained GPS units (because they are not able to make emergency calls). They are also available with prepaid plans, for rates around $20-30 USD per month, for about 2GB/month of 4G data. If coverage and reliability are important to you though, you may want to go with a slightly more expensive carrier. In the US, T-Mobile isn't bad, but Verizon is superb.

To increase battery life of your cell connection, you can connect this access point to an external mobile USB battery pack, which typically will provide 36-48 hours of continuous use with a 6000mAh battery.

The total cost of a WiFi-only tablet with cell modem and battery pack is only roughly $50 USD more than the 4G LTE version of the same device.

In this way, you achieve true baseband isolation, with no risk of audio or network surveillance, baseband exploits, or provider backdoors. Effectively, this cell modem is just another untrusted router in a long, long chain of untrustworthy Internet infrastructure.

Note, however, that even if the cell unit does not contain a fine-grained GPS, you still sacrifice location privacy while using it. Over an extended period of time, it will be possible to make inferences about your physical activity, behavior and personal preferences, and your identity, based on cell tower use alone.

Installation and Setup

We will focus on the installation of Cyanogenmod 11 using Team Win Recovery Project, both to give this HOWTO some shelf life, and because Cyanogenmod 11 features full SELinux support (Dear NSA: What happened to you guys? You used to be cool. Well, some of you. Some of the time. Maybe. Or maybe not).

The use of Google Apps and Google Play services is not recommended due to security issues with Google Play. However, we do provide workarounds for mitigating those issues, if Google Play is required for your use case.

Installation and Setup: ROM and Core App Installation

With the 2013 Google Nexus 7 tablet, installation is fairly straightforward. In fact, it is actually possible to install and use the device before associating it with a Google Account in any way. This is a desirable property, because by default, the otherwise mandatory initial setup process of the stock Google ROM sends your device MAC address directly to Google and links it to your Google account (all without using Tor, of course).

The official Cyanogenmod installation instructions are available online, but with a fresh out of the box device, here are the key steps for installation without activating the default ROM code at all (using Team Win Recovery Project instead of ClockWorkMod).

First, on your desktop/laptop computer (preferably Linux), perform the following:

  1. Download the latest CyanogenMod 11 release (we used cm-11-20140504-SNAPSHOT-M6)
  2. Download the latest Team Win Recovery Project image
  3. Download the F-Droid package (we used 0.66)
  4. Download the Orbot package from F-Droid (we used 13.0.7)
  5. Download the Droidwall package from F-Droid (we used 1.5.7)
  6. Download the Droidwall Firewall Scripts attached to this blog post
  7. Download the Google Apps for Cyanogenmod 11 (optional)

Because the download integrity for all of these packages is abysmal, here is a signed set of SHA256 hashes I've observed for those packages.
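The verification step itself is easy to script. The sketch below demonstrates the workflow with a stand-in file; substitute the actual package filenames and the signed SHA256 hashes published with this post:

```shell
# Demonstration of hash verification using a stand-in file. Replace
# "package.zip" and the generated SHA256SUMS with the real packages
# and the signed hash list from this post.
echo "example package contents" > package.zip
sha256sum package.zip > SHA256SUMS
# Before flashing, confirm nothing was corrupted or tampered with:
sha256sum -c SHA256SUMS && echo "hashes OK"
```

If any package was modified in transit, `sha256sum -c` will report a mismatch and the final message will not print.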

Once you have all of those packages, boot your tablet into fastboot mode by holding the Power button and the Volume Down button during a cold boot. Then, attach it to your desktop/laptop machine with a USB cable and run the following commands from a Linux/UNIX shell:

 sudo apt-get install android-tools-adb android-tools-fastboot
 fastboot devices
 fastboot oem unlock
 fastboot flash recovery openrecovery-twrp-

After the recovery firmware is flashed successfully, use the volume keys to select Recovery and hit the power button to reboot the device (or power it off, and then boot holding Power and Volume Up).

Once Team Win boots, go into Wipe and select Advanced Wipe. Select all checkboxes except for USB-OTG, and slide to wipe. Once the wipe is done, click Format Data. After the format completes, issue these commands from your Linux shell:

 adb start-server
 adb push /sdcard/
 adb push /sdcard/ # Optional

After this push process completes, go to the Install menu, and select the Cyanogen zip, and optionally the gapps zip for installation. Then click Reboot, and select System.

After rebooting into your new installation, skip all CyanogenMod and Google setup, disable location reporting, and immediately disable WiFi and turn on Airplane mode.

Then, go into Settings -> About Tablet and scroll to the bottom and click the greyed out Build number 5 times until developer mode is enabled. Then go into Settings -> Developer Options and turn on USB Debugging.

After that, run the following commands from your Linux shell:

 adb install FDroid.apk
 adb install org.torproject.android_86.apk
 adb install com.googlecode.droidwall_157.apk

You will need to approve the ADB connection for the first package, and then they should install normally.

VERY IMPORTANT: Whenever you finish using adb, always remember to disable USB Debugging and restore Root Access to Apps only. While Android 4.2+ ROMs now prompt you to authorize an RSA key fingerprint before allowing a debugging connection (thus mitigating adb exploit tools that bypass screen lock and can install root apps), you still risk additional vulnerability surface by leaving debugging enabled.

Installation and Setup: Initial Configuration

After the base packages are installed, go into the Settings app, and make the following changes:

  1. Wireless & Networks More... =>
    • Temporarily Disable Airplane Mode
    • NFC -> Disable
    • Re-enable Airplane Mode
  2. Location Access -> Off
  3. Security =>
    • PIN screen Lock
    • Allow Unknown Sources (For F-Droid)
  4. Language & Input =>
    • Spell Checker -> Android Spell Checker -> Disable Contact Names
    • Disable Google Voice Typing/Hotword detection
    • Android Keyboard (AOSP) =>
      • Disable AOSP next-word suggestion (do this first!)
      • Auto-correction -> Off
  5. Backup & reset =>
    • Enable Back up my data (just temporarily, for the next step)
    • Uncheck Automatic restore
    • Disable Backup my data
  6. About Tablet -> Cyanogenmod Statistics -> Disable reporting
  7. Developer Options -> Device Hostname -> localhost
  8. SuperUser -> Settings (three dots) -> Notifications -> Notification (not toast)
  9. Privacy -> Privacy Guard =>
    • Enabled by default
    • Settings (three dots) -> Show Built In Apps
    • Enable Privacy Guard for every app with the following exceptions:
      • Calendar
      • Config Updater
      • Google Account Manager (long press)
        • Modify Settings -> Off
        • Wifi Change -> Off
        • Data Change -> Off
      • Google Play Services (long press)
        • Location -> Off
        • Modify Settings -> Off
        • Draw on top -> Off
        • Record Audio -> Off
        • Wifi Change -> Off
      • Google Play Store (long press)
        • Location -> Off
        • Send SMS -> Off
        • Modify Settings -> Off
        • Data change -> Off
      • Google Services Framework (long press)
        • Modify Settings -> Off
        • Wifi Change -> Off
        • Data Change -> Off
      • Trebuchet

Now, it is time to encrypt your tablet. It is important to do this step early, as I have noticed additional apps and configuration tweaks can make this process fail later on.

We will also do this from the shell, in order to set a different password than your screen unlock pin. This is done to mitigate the risk of compromise of this password from shoulder surfers, and to allow the use of a much longer (and non-numeric) password that you would prefer not to type every time you unlock the screen.

To do this, open the Terminal app, and type the following commands:

vdc cryptfs enablecrypto inplace NewMoreSecurePassword

Watch for typos! That command does not ask you to re-type that password for confirmation.
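Because of that typo risk, you may want to wrap the command in a small guard that demands the passphrase twice. This is a hypothetical convenience wrapper (the function name is our own invention, not part of Android):

```shell
# confirm_encrypt PASS1 PASS2 -- hypothetical guard around vdc: only
# starts encryption when both passphrase arguments match, mitigating
# the no-confirmation typo risk described above.
confirm_encrypt() {
  if [ "$1" = "$2" ]; then
    vdc cryptfs enablecrypto inplace "$1"
  else
    echo "Passphrases do not match; aborting." >&2
    return 1
  fi
}
# Example with a deliberate mismatch (so no encryption is started):
confirm_encrypt "NewMoreSecurePassword" "NewMoreSecurePassw0rd" || echo "no action taken"
```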

Installation and Setup: Disabling Invasive Apps and Services

Before you configure the Firewall or enable the network, you likely want to disable at least a subset of the following built-in apps and services, by using Settings -> Apps -> All, and then clicking on each app and hitting the Disable button:

  • Face Unlock
  • Google Backup Transport
  • Google Calendar Sync
  • Google One Time Init
  • Google Partner Setup
  • Google Contacts Sync
  • Google Search
  • Hangouts
  • Market Feedback Agent
  • News & Weather
  • One Time Init
  • Picasa Updater
  • Sound Search for Google Play
  • TalkBack

Installation and Setup: Tor and Firewall configuration

Ok, now let's install the firewall and Tor support scripts. Go back into Settings -> Developer Options, enable USB Debugging, and change Root Access to Apps and ADB. Then, unzip the firewall scripts archive on your laptop, and run the installation script:


That firewall installation provides several key scripts with functionality that is currently impossible to achieve with any app (including Orbot):

  1. It installs a userinit script to block all network access during boot.
  2. It disables "Google Captive Portal Detection", which involves connection attempts to Google servers upon WiFi association (these requests are made by the Android Settings UID, which should normally be blocked from the network, unless you are first registering for Google Play).
  3. It contains a Droidwall script that configures Tor transproxy rules to send all of your traffic through Tor. These rules include a fix for a Linux kernel Tor transproxy packet leak issue.
  4. The main Droidwall script also includes an input firewall, to block all inbound connections to the device. It also fixes a Droidwall permissions vulnerability.
  5. It installs an optional script to allow the Browser app to bypass Tor for logging into WiFi captive portals.
  6. It installs an optional script to temporarily allow network adb access when you need it (if you are paranoid about USB exploits, which you should be).
  7. It provides an optional script to allow the UDP activity of LinPhone to bypass Tor, to allow ZRTP-encrypted Voice and Video SIP/VoIP calls. SIP account login/registration and call setup/signaling can be done over TCP, and Linphone's TCP activity is still sent through Tor with this script.

Note that with the exception of the userinit network blocking script, installing these scripts does not activate them. You still need to configure Droidwall to use them.
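For readers curious what a transproxy rule set looks like, here is a simplified illustration. This is NOT the actual script from the zip: the ports assume Orbot's defaults (TransPort 9040, DNSPort 5400), the UID is a placeholder, and IPT is set to echo so the rules are printed rather than applied:

```shell
# Simplified sketch of Tor transproxy iptables rules (illustrative only;
# the real script in the zip differs). Ports assume Orbot defaults:
# TransPort 9040, DNSPort 5400. Drop the echo to apply for real, as root.
IPT="echo iptables"
ORBOT_UID=1012   # placeholder; a real script must look up Orbot's UID
# Let Orbot itself reach the network directly:
$IPT -t nat -A OUTPUT -m owner --uid-owner $ORBOT_UID -j RETURN
# Redirect DNS and all new TCP connections into Tor:
$IPT -t nat -A OUTPUT -p udp --dport 53 -j REDIRECT --to-ports 5400
$IPT -t nat -A OUTPUT -p tcp --syn -j REDIRECT --to-ports 9040
# Input firewall: the real script also blocks all inbound connections:
$IPT -A INPUT -j DROP
```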

We use Droidwall instead of Orbot or AFWall+ for five reasons:

  1. Droidwall's app-based firewall and Orbot's transproxy are known to conflict and reset one another.
  2. Droidwall does not randomly drop transproxy rules when switching networks (Orbot has had several of these types of bugs).
  3. Unlike AFWall+, Droidwall is able to auto-launch at "boot" (though still not before the network and Android Services come online and make connections).
  4. AFWall+'s "fix" for this startup data leak problem does not work on Cyanogenmod (hence our userinit script instead).
  5. Aside from the permissions issue fixed by our script, AFWall+ provides no additional security fixes over the stock Droidwall.

To make use of the firewall scripts, open up Droidwall and hit the config button (the vertical three dots), go to More -> Set Custom Script. Enter the following:

. /data/local/
#. /data/local/
#. /data/local/
#. /data/local/

NOTE: You must not make any typos in the above. If you mistype any of those files, things may break. Because the script blocks all network at boot, if you make a typo in the torify script, you will be unable to use the Internet at all!

Also notice that these scripts have been installed into a read-only root directory. Because they are run as root, installing them to a world-writable location like /sdcard/ would be extremely unwise.

Later, if you want to enable one of network adb, LinPhone UDP, or captive portal login, go back into this window and remove the leading comment ('#') from the appropriate lines (this is obviously one of the many aspects of this prototype that could benefit from real UI).

Then, configure the apps you want to allow to access the network. Note that the only Android system apps that must access the network are:

  • CM Updater
  • Downloads, Media Storage, Download Manager
  • F-Droid

Orbot's network access is handled via the main firewall script. You do not need to enable full network access for Orbot in Droidwall.

The rest of the apps you can enable at your discretion. They will all be routed through Tor automatically.

Once Droidwall is configured, you can click on the Menu (three dots) and click the "Firewall Disabled" button to enable the firewall. Then, you can enable Orbot. Do not grant Orbot superuser access. It still opens the transproxy ports you need without root, and Droidwall is managing installation of the transproxy rules, not Orbot.

You are now ready to enable WiFi and network access on your device. For vulnerability surface reduction, you may want to use Advanced Options -> Static IP to manually enter an IP address for your device, to avoid using dhclient. You do not need a working DNS server, since DNS requests are redirected through Tor by the transproxy rules.

Google Apps Setup

If you installed the Google Apps zip, you need to do a few things now to set it up, and to further harden your device. If you opted out of Google Apps, you can skip to the next section.

Google Apps Setup: Initializing Google Play

The first time you use Google Play, you will need to enable four apps in Droidwall: "Google Account Manager, Google Play Services...", "Settings, Dev Tools, Fused Location...", "Gmail", and "Google Play" itself.

If you do not have a Google account, your best bet is to find open wifi to create one, as Google will often block accounts created through Tor, even if you use an Android device.

After you log in for the first time, you should be able to disable the "Google Account Manager, Google Play Services...", "Gmail", and the "Settings..." apps in Droidwall, but your authentication tokens in Google Play may expire periodically. If this happens, you should only need to temporarily enable the "Google Account Manager, Google Play Services..." app in Droidwall to obtain new ones.

Google Apps Setup: Mitigating the Google Play Backdoors

If you do choose to use Google Play, you need to be very careful about how you allow it to access the network. In addition to the risks associated with using a proprietary App Store that can send you targeted malware-infected packages based on your Google Account, it has at least two major user experience flaws:

  1. Anyone who is able to gain access to your Google account can silently install root or full permission apps without any user interaction whatsoever. Once installed, these apps can retroactively clear what little installation notification and UI-based evidence of their existence there was in the first place.
  2. The Android Update Process does not inform the user of changes in permissions of pending update apps that happen to get installed after an Android upgrade.

The first issue can be mitigated by ensuring that Google Play does not have access to the network when not in use, by disabling it in Droidwall. If you do not do this, apps can be installed silently behind your back. Welcome to the Google Experience.

For the second issue, you can install the SecCheck utility, to monitor your apps for changes in permissions during a device upgrade.

Google Apps Setup: Disabling Google Cloud Messaging

If you have installed the Google Apps zip, you have also enabled a feature called Google Cloud Messaging.

The Google Cloud Messaging Service allows apps to register for asynchronous remote push notifications from Google, as well as send outbound messages through Google.

Notification registration and outbound messages are sent via the app's own UID, so using Droidwall to disable network access by an app is enough to prevent outbound data, and notification registration. However, if you ever allow network access to an app, and it does successfully register for notifications, these notifications can be delivered even when the app is once again blocked from accessing the network by Droidwall.

These inbound notifications can be blocked by disabling network access to the "Google Account Manager, Google Play Services, Google Services Framework, Google Contacts Sync" in Droidwall. In fact, the only reason you should ever need to enable network access by this service is if you need to log in to Google Play again if your authentication tokens ever expire.

If you would like to test your ability to control Google Cloud Messaging, there are two apps in the Google Play store that can help with this. GCM Test allows for simple send and receive pings through GCM. Push Notification Tester will allow you to test registration and asynchronous GCM notification.

Recommended Privacy and Auditing Software

Ok, now that we have locked down our Android device, it is time for the fun bit: secure communications!

We recommend the following apps from F-Droid:

  1. Xabber

    Xabber is a full Java implementation of XMPP, and supports both OTR and Tor. Its UI is a bit more streamlined than Guardian Project's ChatSecure, and it does not make use of any native code components (which are more vulnerable to code execution exploits than pure Java code). Unfortunately, this means it lacks some of ChatSecure's nicer features, such as push-to-talk voice and file transfer.

    Despite better protection against code execution, it does have several insecure default settings. In particular, you want to make the following changes:

    • Notifications -> Message text in Notifications -> Off (notifications can be read by other apps!)
    • Accounts -> Integration into system accounts -> Off
    • Accounts -> Store message history -> Don't Store
    • Security -> Store History -> Off
    • Security -> Check Server Certificate
    • Chat -> Show Typing Notifications -> Off
    • Connection Settings -> Auto-away -> Disabled
    • Connection Settings -> Extended away when idle -> Disabled
    • Keep Wifi Awake -> On
    • Prevent sleep Mode -> On

  2. Offline Calendar

    Offline Calendar is a hack to allow you to create a fake local Google account that does not sync to Google. This allows you to use the Calendar App without risk of leaking your activities to Google. Note that you must exempt both this app and Calendar from Privacy Guard for it to function properly.

  3. LinPhone

    LinPhone is a FOSS SIP client that supports TCP TLS signaling and ZRTP media encryption. Note that neither TLS nor ZRTP are enabled by default; you must manually enable them in Settings -> Network -> Transport and Settings -> Network -> Media Encryption. The Guardian Project runs a free SIP service that supports only TLS and ZRTP, but does not allow outdialing to normal PSTN telephone numbers. While Bitcoin has many privacy issues of its own, the Bitcoin community maintains a couple of lists of "trunking" providers that allow you to obtain a PSTN phone number in exchange for Bitcoin payment.

  4. Plumble

    Plumble is a Mumble client that will route voice traffic over Tor, which is useful if you would like to communicate with someone over voice without revealing your IP to them, or your activity to a local network observer. However, unlike LinPhone, voice traffic is not end-to-end encrypted, so the Mumble server can listen to your conversations.

  5. K-9 Mail and APG

    K-9 Mail is a POP/IMAP client that supports TLS and integrates well with APG, which will allow you to send and receive GPG-encrypted mail easily. Before using it, you should be aware of two things: it identifies itself in your mail headers, which opens you up to targeted attacks specifically tailored for K-9 Mail and/or Android, and by default it includes the subject of messages in mail notifications (which is bad, because other apps can read notifications). There is a privacy option to disable subject text in notifications, but there is no option to disable the user agent in the mail headers.

  6. OSMAnd~

    OSMAnd~ is a free offline mapping tool. While the UI is a little clunky, it does support voice navigation and driving directions, and is a handy, private alternative to Google Maps.

  7. VLC

    The VLC port in F-Droid is a fully capable media player. It can play mp3s and most video formats in use today. It is a handy, private alternative to Google Music and other closed-source players that often report your activity to third-party advertisers. VLC does not need network access to function.

  8. Firefox

    We do not yet have a port of Tor Browser for Android (though one is underway -- see the Future Work section). Unless you want to use Google Play to get Chrome, Firefox is your best bet for a web browser that receives regular updates (the built-in Browser app does not). HTTPS-Everywhere and NoScript are available, at least.

  9. Bitcoin

    Bitcoin might not be the most private currency in the world. In fact, you might even say it's the least private currency in the world. But, it is a neat toy.

  10. Launch App Ops

    The Launch App Ops app is a simple shortcut into the hidden application permissions editor in Android. A similar interface is available through Settings -> Privacy -> Privacy Guard, but a direct shortcut to edit permissions is handy. It also displays some additional system apps that Privacy Guard omits.

  11. Permissions

    The Permissions app gives you a view of all Android permissions, and shows you which apps have requested a given permission. This is particularly useful to disable the record audio permission for apps that you don't want to suddenly decide to listen to you. (Interestingly, the Record Audio permission disable feature was broken in all Android ROMs I tested, aside from Cyanogenmod 11. You can test this yourself by revoking the permission from the Sound Recorder app, and verifying that it cannot record.)

  12. CatLog

    In addition to being supercute, CatLog is an excellent Android monitoring and debugging tool. It allows you to monitor and record the full set of Android log events, which can be helpful in diagnosing issues with apps.

  13. OS Monitor

    OS Monitor is an excellent Android process and connection monitoring app that can help you watch for CPU usage and connection attempts by your apps.

  14. Intent Intercept

    Intent Intercept allows you to inspect and extract Android Intent content without allowing it to get forwarded to an actual app. This is useful for monitoring how apps attempt to communicate with each other, though be aware it only covers one of the mechanisms of inter-app communication in Android.

Backing up Your Device Without Google

Now that your device is fully configured and installed, you probably want to know how to back it up without sending all of your private information directly to Google. While the Team Win Recovery Project will back up all of your system settings and apps (even if your device is encrypted), it currently does not back up the contents of your virtualized /sdcard. Remembering to do a couple adb pulls of key directories can save you a lot of heartache should you suffer some kind of data loss or hardware failure (or simply drop your tablet on a bridge while in a rush to catch a train).

The backup script uses adb to pull your Download and Pictures directories from /sdcard, as well as the entire TWRP backup directory.

Before you use that script, you probably want to delete old TWRP backup folders so as to only pull one backup, to reduce pull time. These live in /sdcard/TWRP/BACKUPS/, which is also known as /storage/emulated/0/TWRP/BACKUPS in the File Manager app.

To use this script over the network without a USB cable, enable both USB Debugging and ADB Over Network in your developer settings. The script does not require you to enable root access from adb, so leave Root Access set to Apps only. Note that a backup takes quite a while to run, especially if you are using network adb.
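The pull pattern the backup script relies on can be illustrated with a small loop. The directory list here is illustrative, and DRYRUN=echo prints the adb commands instead of running them; set DRYRUN to empty once a device is actually connected:

```shell
# Illustrative sketch of the backup pull loop (not the actual script).
# DRYRUN=echo previews the commands; set DRYRUN= to transfer for real.
DRYRUN=echo
mkdir -p backup
for dir in Download Pictures TWRP; do
  $DRYRUN adb pull "/sdcard/$dir" "backup/$dir"
done
```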

Prior to using network adb, you must edit your Droidwall custom scripts to allow it (by removing the leading '#' from the network adb line you entered earlier), and then run the following commands from a non-root Linux shell on your desktop/laptop (the ADB Over Network setting will tell you the IP and port):

killall adb
adb connect ip:5555

VERY IMPORTANT: Don't forget to disable USB Debugging, as well as the Droidwall adb exemption when you are done with the backup!

Removing the Built-in Microphone

If you would really like to ensure that your device cannot listen to you even if it is exploited, it turns out it is very straightforward to remove the built-in microphone in the Nexus 7. There is only one mic on the 2013 model, and it is located just below the volume buttons (the tiny hole).

To remove it, all you need to do is pop off the back panel (this can be done with your fingernails, or a tiny screwdriver), and then you can shave the microphone right off that circuit board, and reattach the panel. I have done this to one of my devices, and it was subsequently unable to record audio at all, without otherwise affecting functionality.

You can still use apps that require a microphone by plugging in a headphone headset with a built-in mic (these cost around $20, and you can get them from nearly any consumer electronics store). I have also tested this, and was still able to make a LinPhone call from a device with the built-in microphone removed, but with an external headset. Note that the 2012 Nexus 7 does not support these combination microphone+headphone jacks (and it has a secondary microphone as well). You must have the 2013 model.

The 2013 Nexus 7 Teardown video can give you an idea of what this looks like before you try it. Again you do not need to fully disassemble the device - you only need to remove the back cover.

Pro-Tip: Before you go too crazy and start ripping out the cameras too, remember that you can cover the cameras with a sticker or tape when not in use. I have found that regular old black electrical tape applies seamlessly, is non-obvious to casual onlookers, and is easy to remove without smudging or gunking up the lenses. Better still, it can be removed and reapplied many times without losing its adhesive.

Removing the Remnants of the Baseband

There is one more semi-hardware mod you may want to make, though.

It turns out that the 2013 Wifi Nexus 7 does actually have a partition that contains a cell network baseband firmware on it, located on the filesystem as the block device /dev/block/platform/msm_sdcc.1/by-name/radio. If you run strings on that block device from the shell, you can see that all manner of CDMA and GSM log messages, comments, and symbols are present in that partition.

According to ADB logs, Cyanogenmod 11 actually does try to bring up a cell network radio at boot on my WiFi-only Nexus 7, but fails due to it being disabled. There is also a strong economic incentive for Asus and Google to make it extremely difficult to activate the baseband, even if the hardware is otherwise identical (as it may well be, for manufacturing reasons), since they sell the WiFi-only version for $100 less. If it were easy to re-enable the baseband, HOWTOs would exist (which they do not seem to, at least not yet), and they would cut into LTE device sales.

Even so, since we lack public schematics for the Nexus 7 to verify that cell components are actually missing or hardware-disabled, it may be wise to wipe this radio firmware as well, as defense in depth.

To do this, open the Terminal app, and run:

cd /dev/block/platform/msm_sdcc.1/by-name
dd if=/dev/zero of=./radio

I have wiped that partition while the device was running without any issue, or any additional errors from ADB logs.

Note that an anonymous commenter also suggested it is possible to disable the baseband of a cell-enabled device using a series of Android service disable commands, and by wiping that radio block device. I have not tested this on a device other than the WiFi-only Nexus 7, though, so proceed with caution. If you try those steps on a cell-enabled device, you should archive a copy of your radio firmware first, by doing something like the following from the dev directory that contains the radio firmware block device:

dd if=./radio of=/sdcard/radio.img

If anything goes wrong, you can restore that image with:

dd if=/sdcard/radio.img of=./radio
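If you are nervous about getting the dd invocations right, you can rehearse the whole backup/wipe/restore cycle safely with a scratch file standing in for the block device (the filenames below are stand-ins; nothing here touches hardware):

```shell
# Rehearsal of the backup/wipe/restore cycle using a scratch file in
# place of the real radio block device. Nothing here touches hardware.
printf 'fake radio firmware' > radio        # stand-in for ./radio
dd if=./radio of=./radio.img 2>/dev/null    # backup
dd if=/dev/zero of=./radio bs=1 count=19 2>/dev/null  # wipe
dd if=./radio.img of=./radio 2>/dev/null    # restore
cmp ./radio.img ./radio && echo "restore verified"
```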

Future Work

In addition to streamlining the contents of this post into a single additional Cyanogenmod installation zip or alternative ROM, the following problems remain unsolved.

Future Work: Better Usability

While arguably very secure, this system is obviously nowhere near usable. Here are some potential improvements to the user interface, based on a brainstorming session I had with another interested developer.

First of all, the AFWall+/Droidwall UI should be changed to be a tri-state: It should allow you to send app traffic over Tor, over your normal internet connection, or block it entirely.

Next, during app installation from either F-Droid or Google Play (this is an Intent another addon app can actually listen for), the user should be given the chance to decide if they would like that app's traffic to be routed over Tor, use the normal Internet connection, or be blocked entirely from accessing the network. Currently, the Droidwall default for new apps is "no network", which is a great default, but it would be nice to ask users what they would like to do during actual app installation.

Moreover, users should also be given a chance to edit the app's permissions upon installation as well, should they desire to do so.

The Google Play situation could also be vastly improved, should Google itself still prove unwilling to improve the situation. Google Play could be wrapped in a launcher app that automatically grants it network access prior to launch, and then disables it upon leaving the window.

A similar UI could be added to LinPhone. Because the actual voice and video transport for LinPhone does not use Tor, it is possible for an adversary to learn your SIP ID or phone number, and then call you just for the purposes of learning your IP. Because we handle call setup over Tor, we can prevent LinPhone from performing any UDP activity, or divulging your IP to the calling party prior to user approval of the call. Ideally, we would also want to inform the user of the fact that incoming calls can be used to obtain information about them, at least prior to accepting their first call from an unknown party.

Future Work: Find Hardware with Actual Isolated Basebands

Related to usability, it would be nice if we could have a serious community effort to audit the baseband isolation properties of existing cell phones, so we don't all have to carry around these ridiculous battery packs and sketch-ass WiFi bridges. There is no engineering reason why this prototype could not be just as secure if it were a single piece of hardware. We just need to find the right hardware.

A random commenter claimed that the Galaxy Nexus might actually have exactly the type of baseband isolation we want, but the comment was from memory, and based on software reverse engineering efforts that were not publicly documented. We need to do better than this.

Future Work: Bug Bounty Program

If there is sufficient interest in this prototype, and/or if it gets transformed into a usable addon package or ROM, we may consider running a bug bounty program: we would accept donations to a dedicated Bitcoin address, and award the contents of that wallet to anyone who discovers a Tor proxy bypass or a remote code execution vulnerability in any of the network-enabled apps mentioned in this post (except for the Browser app, which does not receive security updates).

Future Work: Port Tor Browser to Android

The Guardian Project is undertaking a port of Tor Browser to Android as part of their OrFox project. This will greatly improve the privacy of your web browsing experience on the Android device over both Firefox and Chrome. We look forward to helping them in any way we can with this effort.

Future Work: WiFi MAC Address Randomization

It is actually possible to randomize the WiFi MAC address on the Google Nexus 7. The closed-source root app Mac Spoofer is able to modify the device MAC address using Qualcomm-specific methods in such a way that the entire Android OS becomes convinced that this is your actual MAC.

However, doing this requires installation of a root-enabled, closed-source application from the Google Play Store, which we believe is extremely unwise on a device you need to be able to trust. Moreover, this app cannot be autorun on boot, and your MAC address will also reset every time you disable the WiFi interface (which is easy to do accidentally). It also supports using only a single, manually entered MAC address.

Hardware-independent techniques (such as the terminal command busybox ifconfig wlan0 hw ether <mac>) appear to interfere with the WiFi management system and prevent it from associating. Moreover, they do not cause the Android system to report the new MAC address, either (visible under Settings -> About Tablet -> Status).

Obviously, an Open Source F-Droid app that properly resets (and automatically randomizes) the MAC every time the WiFi interface is brought up is badly needed.
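
The core of such an app is small. Here is a hypothetical sketch of the randomization step as a shell function: it generates a locally-administered, unicast MAC address, which by construction can never collide with a real vendor OUI. The apply step is shown only as comments, since (as noted above) naive interface-level changes can confuse Android's WiFi management.

```shell
#!/bin/sh
# Hypothetical core of the proposed F-Droid app: generate a random
# locally-administered, unicast MAC address on each WiFi bring-up.
random_mac() {
  # First octet: clear the multicast bit (bit 0) and set the
  # locally-administered bit (bit 1).
  printf '%02x' "$(( ($(od -An -N1 -tu1 /dev/urandom) & 0xFC) | 0x02 ))"
  for _ in 1 2 3 4 5; do
    printf ':%02x' "$(( $(od -An -N1 -tu1 /dev/urandom) ))"
  done
  printf '\n'
}

random_mac
# Applying it would need root, and would have to cooperate with the
# WiFi management system rather than fight it:
#   ip link set wlan0 down
#   ip link set wlan0 address "$(random_mac)"
#   ip link set wlan0 up
```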

Future Work: Disable Probes for Configured Wifi Networks

The Android OS currently probes for all of your configured WiFi networks while looking for open WiFi to connect to. Configured networks should not be probed for explicitly unless activity for their BSSID is seen. The xda-developers forum has a limited fix to change scanning behavior, but users report that it does not disable the active probing behavior for any "hidden" networks that you have configured.

Future Work: Recovery ROM Password Protection

An unlocked recovery ROM is a huge vulnerability surface for Android. While disk encryption protects your applications and data, it does not protect many key system binaries and boot programs. With physical access, it is possible to modify these binaries through your recovery ROM.

The ability to set a password for the Team Win recovery ROM, in such a way that a simple "fastboot flash recovery" would overwrite it, would go a long way toward improving device security: at least it would then become evident to you that your recovery ROM had been replaced, due to the absence of the password.

It may also be possible to restore your bootloader lock as an alternative, but then you lose the ability to make backups of your system using Team Win.

Future Work: Disk Encryption via TPM or Clever Hacks

Unfortunately, even disk encryption and a secure recovery firmware are not enough to fully defend against an adversary with an extended period of physical access to your device.

Cold Boot Attacks are still very much a reality against any form of disk encryption, and the best way to eliminate them is through hardware-assisted secure key storage, such as through a TPM chip on the device itself.

It may also be possible to mitigate these attacks by placing key material in SRAM memory locations that will be overwritten as part of the ARM boot process. If these physical memory locations are stable (and for ARM systems that use the SoC SRAM to boot, they will be), rebooting the device to extract key material will always end up overwriting it. Similar ARM CPU-based encryption defenses have also been explored in the research literature.

Future Work: Download and Build Process Integrity

Beyond the download integrity issues mentioned above, better build security is also badly needed by all of these projects. A Gitian descriptor that is capable of building Cyanogenmod and arbitrary F-Droid packages in a reproducible fashion is one way to go about achieving this property.

Future Work: Removing Binary Blobs

If you read the Cyanogenmod build instructions closely, you can see that it requires extracting the binary blobs from some random phone, and shipping them out. This is the case with most ROMs. In fact, only the Replicant Project seems concerned with this practice, but regrettably they do not support any WiFi-only devices. This is rather unfortunate, because no matter what they do with the Android OS on existing cell-enabled devices, they will always be stuck with a closed-source, backdoored baseband that has direct access to the microphone, if not the RAM and the entire Android OS.

Kudos to them for finding one of the backdoors though, at least.

Changes Since Initial Posting

  1. Updated firewall scripts to fix Droidwall permissions vulnerability.
  2. Updated Applications List to recommend VLC as a free media player.
  3. Mention the Guardian Project's planned Tor Browser port (called OrFox) as Future Work.
  4. Mention disabling configured WiFi network auto-probing as Future Work.
  5. Updated the firewall install script (and the installation zip that contains it) to disable "Captive Portal detection" connections to Google upon WiFi association. These connections are made by the Settings service user, which should normally be blocked unless you are Activating Google Play for the first time.
  6. Updated the Executive Summary section to make it clear that our SIP client can actually also make normal phone calls, too.
  7. Document removing the built-in microphone, for the truly paranoid folk out there.
  8. Document removing the remnants of the baseband, or disabling an existing baseband.
  9. Update SHA256SUM of FDroid.apk for 0.63
  10. Remove multiport usage from script (and update
  11. Add pro-tip to the microphone removal section: Don't remove your cameras. Black electrical tape works just fine, and can be removed and reapplied many times without smudges.
  12. Update installation and documentation to use /data/local instead of /etc. CM updates will wipe /etc, of course. Whoops. If this happened to you while updating to CM-11-M5, download that new zip and run it again as per the instructions above, and update your Droidwall custom script locations to use /data/local.
  13. Update the Future work section to describe some specific UI improvements.
  14. Update the Future work section to mention that we need to find hardware with actual isolated basebands. Duh. This should have been in there much earlier.
  15. Update the versions for everything
  16. Suggest enabling disk crypto directly from the shell, to avoid SSD leaks of the originally PIN-encrypted device key material.
  17. GMail network access seems to be required for App Store initialization now. Mention this in Google Apps section.
  18. Mention K-9 Mail, APG, and Plumble in the Recommended Apps section.
  19. Update the Firewall instructions to clarify that you need to ensure there are no typos in the scripts, and actually click the Droidwall UI button to enable the Droidwall firewall (otherwise networking will not work at all due to
  20. Disable NFC in Settings config

Tor is released

The Tor 0.2.2 release series is dedicated to the memory of Andreas
Pfitzmann (1958-2010), a pioneer in anonymity and privacy research,
a founder of the PETS community, a leader in our field, a mentor,
and a friend. He left us with these words: "I had the possibility
to contribute to this world that is not as it should be. I hope I
could help in some areas to make the world a better place, and that
I could also encourage other people to be engaged in improving the
world. Please, stay engaged. This world needs you, your love, your
initiative -- now I cannot be part of that anymore."

Tor, the first stable release in the 0.2.2 branch, is finally
ready. More than two years in the making, this release features improved
client performance and hidden service reliability, better compatibility
for Android, correct behavior for bridges that listen on more than
one address, more extensible and flexible directory object handling,
better reporting of network statistics, improved code security, and
many many other features and bugfixes.

Changes in version - 2011-08-27
Major features (client performance):

  • When choosing which cells to relay first, relays now favor circuits
    that have been quiet recently, to provide lower latency for
    low-volume circuits. By default, relays enable or disable this
    feature based on a setting in the consensus. They can override
    this default by using the new "CircuitPriorityHalflife" config
    option. Design and code by Ian Goldberg, Can Tang, and Chris
  • Directory authorities now compute consensus weightings that instruct
    clients how to weight relays flagged as Guard, Exit, Guard+Exit,
    and no flag. Clients use these weightings to distribute network load
    more evenly across these different relay types. The weightings are
    in the consensus so we can change them globally in the future. Extra
    thanks to "outofwords" for finding some nasty security bugs in
    the first implementation of this feature.

Major features (client performance, circuit build timeout):

  • Tor now tracks how long it takes to build client-side circuits
    over time, and adapts its timeout to local network performance.
    Since a circuit that takes a long time to build will also provide
    bad performance, we get significant latency improvements by
    discarding the slowest 20% of circuits. Specifically, Tor creates
    circuits more aggressively than usual until it has enough data
    points for a good timeout estimate. Implements proposal 151.
  • Circuit build timeout constants can be controlled by consensus
    parameters. We set good defaults for these parameters based on
    experimentation on broadband and simulated high-latency links.
  • Circuit build time learning can be disabled via consensus parameter
    or by the client via a LearnCircuitBuildTimeout config option. We
    also automatically disable circuit build time calculation if either
    AuthoritativeDirectory is set, or if we fail to write our state
    file. Implements ticket 1296.

Major features (relays use their capacity better):

  • Set SO_REUSEADDR socket option on all sockets, not just
    listeners. This should help busy exit nodes avoid running out of
    usable ports just because all the ports have been used in the
    near past. Resolves issue 2850.
  • Relays now save observed peak bandwidth throughput rates to their
    state file (along with total usage, which was already saved),
    so that they can determine their correct estimated bandwidth on
    restart. Resolves bug 1863, where Tor relays would reset their
    estimated bandwidth to 0 after restarting.
  • Lower the maximum weighted-fractional-uptime cutoff to 98%. This
    should give us approximately 40-50% more Guard-flagged nodes,
    improving the anonymity the Tor network can provide and also
    decreasing the dropoff in throughput that relays experience when
    they first get the Guard flag.
  • Directory authorities now take changes in router IP address and
    ORPort into account when determining router stability. Previously,
    if a router changed its IP or ORPort, the authorities would not
    treat it as having any downtime for the purposes of stability
    calculation, whereas clients would experience downtime since the
    change would take a while to propagate to them. Resolves issue 1035.
  • New AccelName and AccelDir options add support for dynamic OpenSSL
    hardware crypto acceleration engines.

Major features (relays control their load better):

  • Exit relays now try harder to block exit attempts from unknown
    relays, to make it harder for people to use them as one-hop proxies
    a la tortunnel. Controlled by the refuseunknownexits consensus
    parameter (currently enabled), or you can override it on your
    relay with the RefuseUnknownExits torrc option. Resolves bug 1751;
    based on a variant of proposal 163.
  • Add separate per-conn write limiting to go with the per-conn read
    limiting. We added a global write limit in Tor,
    but never per-conn write limits.
  • New consensus params "bwconnrate" and "bwconnburst" to let us
    rate-limit client connections as they enter the network. It's
    controlled in the consensus so we can turn it on and off for
    experiments. It's starting out off. Based on proposal 163.

Major features (controllers):

  • Export GeoIP information on bridge usage to controllers even if we
    have not yet been running for 24 hours. Now Vidalia bridge operators
    can get more accurate and immediate feedback about their
    contributions to the network.
  • Add an __OwningControllerProcess configuration option and a
    TAKEOWNERSHIP control-port command. Now a Tor controller can ensure
    that when it exits, Tor will shut down. Implements feature 3049.

Major features (directory authorities):

  • Directory authorities now create, vote on, and serve multiple
    parallel formats of directory data as part of their voting process.
    Partially implements Proposal 162: "Publish the consensus in
    multiple flavors".
  • Directory authorities now agree on and publish small summaries
    of router information that clients can use in place of regular
    server descriptors. This transition will allow Tor 0.2.3 clients
    to use far less bandwidth for downloading information about the
    network. Begins the implementation of Proposal 158: "Clients
    download consensus + microdescriptors".
  • The directory voting system is now extensible to use multiple hash
    algorithms for signatures and resource selection. Newer formats
    are signed with SHA256, with a possibility for moving to a better
    hash algorithm in the future.
  • Directory authorities can now vote on arbitrary integer values as
    part of the consensus process. This is designed to help set
    network-wide parameters. Implements proposal 167.

Major features and bugfixes (node selection):

  • Revise and reconcile the meaning of the ExitNodes, EntryNodes,
    ExcludeEntryNodes, ExcludeExitNodes, ExcludeNodes, and Strict*Nodes
    options. Previously, we had been ambiguous in describing what
    counted as an "exit" node, and what operations exactly "StrictNodes
    0" would permit. This created confusion when people saw nodes built
    through unexpected circuits, and made it hard to tell real bugs from
    surprises. Now the intended behavior is:

    • "Exit", in the context of ExitNodes and ExcludeExitNodes, means
      a node that delivers user traffic outside the Tor network.
    • "Entry", in the context of EntryNodes, means a node used as the
      first hop of a multihop circuit. It doesn't include direct
      connections to directory servers.
    • "ExcludeNodes" applies to all nodes.
    • "StrictNodes" changes the behavior of ExcludeNodes only. When
      StrictNodes is set, Tor should avoid all nodes listed in
      ExcludeNodes, even when it will make user requests fail. When
      StrictNodes is *not* set, then Tor should follow ExcludeNodes
      whenever it can, except when it must use an excluded node to
      perform self-tests, connect to a hidden service, provide a
      hidden service, fulfill a .exit request, upload directory
      information, or fetch directory information.

    Collectively, the changes to implement the behavior fix bug 1090.

  • If EntryNodes, ExitNodes, ExcludeNodes, or ExcludeExitNodes
    change during a config reload, mark and discard all our origin
    circuits. This fix should address edge cases where we change the
    config options but then choose a circuit that we created before
    the change.
  • Make EntryNodes config option much more aggressive even when
    StrictNodes is not set. Before it would prepend your requested
    entrynodes to your list of guard nodes, but feel free to use others
    after that. Now it chooses only from your EntryNodes if any of
    those are available, and only falls back to others if a) they're
    all down and b) StrictNodes is not set.
  • Now we refresh your entry guards from EntryNodes at each consensus
    fetch -- rather than just at startup and then they slowly rot as
    the network changes.
  • Add support for the country code "{??}" in torrc options like
    ExcludeNodes, to indicate all routers of unknown country. Closes
    bug 1094.
  • ExcludeNodes now takes precedence over EntryNodes and ExitNodes: if
    a node is listed in both, it's treated as excluded.
  • ExcludeNodes now applies to directory nodes -- as a preference if
    StrictNodes is 0, or an absolute requirement if StrictNodes is 1.
    Don't exclude all the directory authorities and set StrictNodes to 1
    unless you really want your Tor to break.
  • ExcludeNodes and ExcludeExitNodes now override exit enclaving.
  • ExcludeExitNodes now overrides .exit requests.
  • We don't use bridges listed in ExcludeNodes.
  • When StrictNodes is 1:
    • We now apply ExcludeNodes to hidden service introduction points
      and to rendezvous points selected by hidden service users. This
      can make your hidden service less reliable: use it with caution!
    • If we have used ExcludeNodes on ourself, do not try relay
      reachability self-tests.
    • If we have excluded all the directory authorities, we will not
      even try to upload our descriptor if we're a relay.
    • Do not honor .exit requests to an excluded node.
  • When the set of permitted nodes changes, we now remove any mappings
    introduced via TrackExitHosts to now-excluded nodes. Bugfix on
  • We never cannibalize a circuit that had excluded nodes on it, even
    if StrictNodes is 0. Bugfix on
  • Improve log messages related to excluded nodes.
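
As a concrete illustration of these semantics, a short torrc fragment (an example, not a recommendation):

```
## torrc sketch: exit through German relays when possible, never use
## relays whose country is unknown, and make the exclusions absolute
## even when that causes requests to fail.
ExitNodes {de}
ExcludeNodes {??}
StrictNodes 1
```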

Major features (misc):

  • Numerous changes, bugfixes, and workarounds from Nathan Freitas
    to help Tor build correctly for Android phones.
  • The options SocksPort, ControlPort, and so on now all accept a
    value "auto" that opens a socket on an OS-selected port. A
    new ControlPortWriteToFile option tells Tor to write its
    actual control port or ports to a chosen file. If the option
    ControlPortFileGroupReadable is set, the file is created as
    group-readable. Now users can run two Tor clients on the same
    system without needing to manually mess with parameters. Resolves
    part of ticket 3076.
  • Tor now supports tunneling all of its outgoing connections over
    a SOCKS proxy, using the SOCKS4Proxy and/or SOCKS5Proxy
    configuration options. Code by Christopher Davis.
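
For example, a torrc for a second client instance using the new "auto" ports might look like this (the file path here is illustrative):

```
## torrc sketch: let the OS pick free ports, and tell the controller
## where to find them.
SocksPort auto
ControlPort auto
ControlPortWriteToFile /var/run/tor/control.port
ControlPortFileGroupReadable 1
```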

Code security improvements:

  • Replace all potentially sensitive memory comparison operations
    with versions whose runtime does not depend on the data being
    compared. This will help resist a class of attacks where an
    adversary can use variations in timing information to learn
    sensitive data. Fix for one case of bug 3122. (Safe memcmp
    implementation by Robert Ransom based partially on code by DJB.)
  • Enable Address Space Layout Randomization (ASLR) and Data Execution
    Prevention (DEP) by default on Windows to make it harder for
    attackers to exploit vulnerabilities. Patch from John Brooks.
  • New "--enable-gcc-hardening" ./configure flag (off by default)
    to turn on gcc compile time hardening options. It ensures
    that signed ints have defined behavior (-fwrapv), enables
    -D_FORTIFY_SOURCE=2 (requiring -O2), adds stack smashing protection
    with canaries (-fstack-protector-all), turns on ASLR protection if
    supported by the kernel (-fPIE, -pie), and adds additional security
    related warnings. Verified to work on Mac OS X and Debian Lenny.
  • New "--enable-linker-hardening" ./configure flag (off by default)
    to turn on ELF specific hardening features (relro, now). This does
    not work with Mac OS X or any other non-ELF binary format.
  • Always search the Windows system directory for system DLLs, and
    nowhere else. Bugfix on; fixes bug 1954.
  • New DisableAllSwap option. If set to 1, Tor will attempt to lock all
    current and future memory pages via mlockall(). On supported
    platforms (modern Linux and probably BSD but not Windows or OS X),
    this should effectively disable any and all attempts to page out
    memory. This option requires that you start your Tor as root --
    if you use DisableAllSwap, please consider using the User option
    to properly reduce the privileges of your Tor.

Major bugfixes (crashes):

  • Fix crash bug on platforms where gmtime and localtime can return
    NULL. Windows 7 users were running into this one. Fixes part of bug
    2077. Bugfix on all versions of Tor. Found by boboper.
  • Introduce minimum/maximum values that clients will believe
    from the consensus. Now we'll have a better chance to avoid crashes
    or worse when a consensus param has a weird value.
  • Fix a rare crash bug that could occur when a client was configured
    with a large number of bridges. Fixes bug 2629; bugfix on Bugfix by trac user "shitlei".
  • Do not crash when our configuration file becomes unreadable, for
    example due to a permissions change, between when we start up
    and when a controller calls SAVECONF. Fixes bug 3135; bugfix
    on 0.0.9pre6.
  • If we're in the pathological case where there's no exit bandwidth
    but there is non-exit bandwidth, or no guard bandwidth but there
    is non-guard bandwidth, don't crash during path selection. Bugfix
  • Fix a crash bug when trying to initialize the evdns module in
    Libevent 2. Bugfix on

Major bugfixes (stability):

  • Fix an assert in parsing router descriptors containing IPv6
    addresses. This one took down the directory authorities when
    somebody tried some experimental code. Bugfix on
  • Fix an uncommon assertion failure when running with DNSPort under
    heavy load. Fixes bug 2933; bugfix on
  • Treat an unset $HOME like an empty $HOME rather than triggering an
    assert. Bugfix on 0.0.8pre1; fixes bug 1522.
  • More gracefully handle corrupt state files, removing asserts
    in favor of saving a backup and resetting state.
  • Instead of giving an assertion failure on an internal mismatch
    on estimated freelist size, just log a BUG warning and try later.
    Mitigates but does not fix bug 1125.
  • Fix an assert that got triggered when using the TestingTorNetwork
    configuration option and then issuing a GETINFO config-text control
    command. Fixes bug 2250; bugfix on
  • If the cached cert file is unparseable, warn but don't exit.

Privacy fixes (relays/bridges):

  • Don't list Windows capabilities in relay descriptors. We never made
    use of them, and maybe it's a bad idea to publish them. Bugfix
  • If the Nickname configuration option isn't given, Tor would pick a
    nickname based on the local hostname as the nickname for a relay.
    Because nicknames are not very important in today's Tor and the
    "Unnamed" nickname has been implemented, this is now problematic
    behavior: It leaks information about the hostname without being
    useful at all. Fixes bug 2979; bugfix on, which
    introduced the Unnamed nickname. Reported by tagnaq.
  • Maintain separate TLS contexts and certificates for incoming and
    outgoing connections in bridge relays. Previously we would use the
    same TLS contexts and certs for incoming and outgoing connections.
    Bugfix on; addresses bug 988.
  • Maintain separate identity keys for incoming and outgoing TLS
    contexts in bridge relays. Previously we would use the same
    identity keys for incoming and outgoing TLS contexts. Bugfix on; addresses the other half of bug 988.
  • Make the bridge directory authority refuse to answer directory
    requests for "all descriptors". It used to include bridge
    descriptors in its answer, which was a major information leak.
    Found by "piebeer". Bugfix on

Privacy fixes (clients):

  • When receiving a hidden service descriptor, check that it is for
    the hidden service we wanted. Previously, Tor would store any
    hidden service descriptors that a directory gave it, whether it
    wanted them or not. This wouldn't have let an attacker impersonate
    a hidden service, but it did let directories pre-seed a client
    with descriptors that it didn't want. Bugfix on 0.0.6.
  • Start the process of disabling ".exit" address notation, since it
    can be used for a variety of esoteric application-level attacks
    on users. To reenable it, set "AllowDotExit 1" in your torrc. Fix
    on 0.0.9rc5.
  • Reject attempts at the client side to open connections to private
    IP addresses (like,, and so on) with
    a randomly chosen exit node. Attempts to do so are always
    ill-defined, generally prevented by exit policies, and usually
    in error. This will also help to detect loops in transparent
    proxy configurations. You can disable this feature by setting
    "ClientRejectInternalAddresses 0" in your torrc.
  • Log a notice when we get a new control connection. Now it's easier
    for security-conscious users to recognize when a local application
    is knocking on their controller door. Suggested by bug 1196.

Privacy fixes (newnym):

  • Avoid linkability based on cached hidden service descriptors: forget
    all hidden service descriptors cached as a client when processing a
    SIGNAL NEWNYM command. Fixes bug 3000; bugfix on 0.0.6.
  • On SIGHUP, do not clear out all TrackHostExits mappings, client
    DNS cache entries, and virtual address mappings: that's what
    NEWNYM is for. Fixes bug 1345; bugfix on
  • Don't attach new streams to old rendezvous circuits after SIGNAL
    NEWNYM. Previously, we would keep using an existing rendezvous
    circuit if it remained open (i.e. if it were kept open by a
    long-lived stream, or if a new stream were attached to it before
    Tor could notice that it was old and no longer in use). Bugfix on; fixes bug 3375.

Major bugfixes (relay bandwidth accounting):

  • Fix a bug that could break accounting on 64-bit systems with large
    time_t values, making them hibernate for impossibly long intervals.
    Fixes bug 2146. Bugfix on 0.0.9pre6; fix by boboper.
  • Fix a bug in bandwidth accounting that could make us use twice
    the intended bandwidth when our interval start changes due to
    daylight saving time. Now we tolerate skew in stored vs computed
    interval starts: if the start of the period changes by no more than
    50% of the period's duration, we remember bytes that we transferred
    in the old period. Fixes bug 1511; bugfix on 0.0.9pre5.

Major bugfixes (bridges):

  • Bridges now use "reject *:*" as their default exit policy. Bugfix
    on Fixes bug 1113.
  • If you configure your bridge with a known identity fingerprint,
    and the bridge authority is unreachable (as it is in at least
    one country now), fall back to directly requesting the descriptor
    from the bridge. Finishes the feature started in;
    closes bug 1138.
  • Fix a bug where bridge users who configure the non-canonical
    address of a bridge automatically switch to its canonical
    address. If a bridge listens at more than one address, it
    should be able to advertise those addresses independently and
    any non-blocked addresses should continue to work. Bugfix on Tor Fixes bug 2510.
  • If you configure Tor to use bridge A, and then quit and
    configure Tor to use bridge B instead (or if you change Tor
    to use bridge B via the controller), it would happily continue
    to use bridge A if it's still reachable. While this behavior is
    a feature if your goal is connectivity, in some scenarios it's a
    dangerous bug. Bugfix on Tor; fixes bug 2511.
  • When the controller configures a new bridge, don't wait 10 to 60
    seconds before trying to fetch its descriptor. Bugfix on; fixes bug 3198 (suggested by 2355).

Major bugfixes (directory authorities):

  • Many relays have been falling out of the consensus lately because
    not enough authorities know about their descriptor for them to get
    a majority of votes. When we deprecated the v2 directory protocol,
    we got rid of the only way that v3 authorities can hear from each
    other about other descriptors. Now authorities examine every v3
    vote for new descriptors, and fetch them from that authority. Bugfix
  • Authorities could be tricked into giving out the Exit flag to relays
    that didn't allow exiting to any ports. This bug could screw
    with load balancing and stats. Bugfix on; fixes bug
    1238. Bug discovered by Martin Kowalczyk.
  • If all authorities restart at once right before a consensus vote,
    nobody will vote about "Running", and clients will get a consensus
    with no usable relays. Instead, authorities refuse to build a
    consensus if this happens. Bugfix on; fixes bug 1066.

Major bugfixes (stream-level fairness):

  • When receiving a circuit-level SENDME for a blocked circuit, try
    to package cells fairly from all the streams that had previously
    been blocked on that circuit. Previously, we had started with the
    oldest stream, and allowed each stream to potentially exhaust
    the circuit's package window. This gave older streams on any
    given circuit priority over newer ones. Fixes bug 1937. Detected
    originally by Camilo Viecco. This bug was introduced before the
    first Tor release, in svn commit r152: it is the new winner of
    the longest-lived bug prize.
  • Fix a stream fairness bug that would cause newer streams on a given
    circuit to get preference when reading bytes from the origin or
    destination. Fixes bug 2210. Fix by Mashael AlSabah. This bug was
    introduced before the first Tor release, in svn revision r152.
  • When the exit relay got a circuit-level sendme cell, it started
    reading on the exit streams, even if it had 500 cells queued in the
    circuit queue already, so the circuit queue just grew and grew in
    some cases. We fix this by not re-enabling reading on receipt of a
    sendme cell when the cell queue is blocked. Fixes bug 1653. Bugfix
    on Detected by Mashael AlSabah. Original patch by
  • Newly created streams were allowed to read cells onto circuits,
    even if the circuit's cell queue was blocked and waiting to drain.
    This created potential unfairness, as older streams would be
    blocked, but newer streams would gladly fill the queue completely.
    We add code to detect this situation and prevent any stream from
    getting more than one free cell. Bugfix on Partially
    fixes bug 1298.

Major bugfixes (hidden services):

  • Apply circuit timeouts to opened hidden-service-related circuits
    based on the correct start time. Previously, we would apply the
    circuit build timeout based on time since the circuit's creation;
    it was supposed to be applied based on time since the circuit
    entered its current state. Bugfix on 0.0.6; fixes part of bug 1297.
  • Improve hidden service robustness: When we find that we have
    extended a hidden service's introduction circuit to a relay not
    listed as an introduction point in the HS descriptor we currently
    have, retry with an introduction point from the current
    descriptor. Previously we would just give up. Fixes bugs 1024 and
    1930; bugfix on
  • Directory authorities now use data collected from their own
    uptime observations when choosing whether to assign the HSDir flag
    to relays, instead of trusting the uptime value the relay reports in
    its descriptor. This change helps prevent an attack where a small
    set of nodes with frequently-changing identity keys can blackhole
    a hidden service. (Only authorities need upgrade; others will be
    fine once they do.) Bugfix on; fixes bug 2709.
  • Stop assigning the HSDir flag to relays that disable their
    DirPort (and thus will refuse to answer directory requests). This
    fix should dramatically improve the reachability of hidden services:
    hidden services and hidden service clients pick six HSDir relays
    to store and retrieve the hidden service descriptor, and currently
    about half of the HSDir relays will refuse to work. Bugfix on; fixes part of bug 1693.

Major bugfixes (misc):

  • Clients now stop trying to use an exit node associated with a given
    destination by TrackHostExits if they fail to reach that exit node.
    Fixes bug 2999. Bugfix on
  • Fix a regression that caused Tor to rebind its ports if it receives
    SIGHUP while hibernating. Bugfix on; closes bug 919.
  • Remove an extra pair of quotation marks around the error
    message in control-port STATUS_GENERAL BUG events. Bugfix on; fixes bug 3732.

Minor features (relays):

  • Ensure that no empty [dirreq-](read|write)-history lines are added
    to an extrainfo document. Implements ticket 2497.
  • When bandwidth accounting is enabled, be more generous with how
    much bandwidth we'll use up before entering "soft hibernation".
    Previously, we'd refuse new connections and circuits once we'd
    used up 95% of our allotment. Now, we use up 95% of our allotment,
    AND make sure that we have no more than 500MB (or 3 hours of
    expected traffic, whichever is lower) remaining before we enter
    soft hibernation.
  • Relays now log the reason for publishing a new relay descriptor,
    so we have a better chance of hunting down instances of bug 1810.
    Resolves ticket 3252.
  • Log a little more clearly about the times at which we're no longer
    accepting new connections (e.g. due to hibernating). Resolves
    bug 2181.
  • When AllowSingleHopExits is set, print a warning to explain to the
    relay operator why most clients are avoiding her relay.
  • Send END_STREAM_REASON_NOROUTE in response to EHOSTUNREACH errors.
    Clients before didn't handle NOROUTE correctly, but such
    clients are already deprecated because of security bugs.
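The revised soft-hibernation rule above combines two conditions: 95% of the allotment used, and no more than min(500 MB, 3 hours of expected traffic) remaining. A sketch of that arithmetic (illustrative only, not Tor's accounting code):

```python
def should_soft_hibernate(used, allotment, expected_rate_bps):
    """Enter soft hibernation once 95% of the bandwidth allotment is
    used AND the remainder is at most 500 MB or 3 hours of expected
    traffic, whichever is lower (sketch of the rule described above)."""
    remaining = allotment - used
    cap = min(500 * 1024 * 1024, 3 * 3600 * expected_rate_bps)
    return used >= 0.95 * allotment and remaining <= cap

# A relay at 96% of a 10 GB allotment, expecting 30 KB/s:
print(should_soft_hibernate(9_600_000_000, 10_000_000_000, 30_000))
# False: the remaining 400 MB exceeds 3 hours of expected traffic
print(should_soft_hibernate(9_800_000_000, 10_000_000_000, 30_000))
# True: only 200 MB left, below the 3-hour cap
```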

Minor features (network statistics):

  • Directory mirrors that set "DirReqStatistics 1" write statistics
    about directory requests to disk every 24 hours. As compared to the
    "--enable-geoip-stats" ./configure flag in 0.2.1.x, there are a few
    improvements: 1) stats are written to disk exactly every 24 hours;
    2) estimated shares of v2 and v3 requests are determined as mean
    values, not at the end of a measurement period; 3) unresolved
    requests are listed with country code '??'; 4) directories also
    measure download times.
  • Exit nodes that set "ExitPortStatistics 1" write statistics on the
    number of exit streams and transferred bytes per port to disk every
    24 hours.
  • Relays that set "CellStatistics 1" write statistics on how long
    cells spend in their circuit queues to disk every 24 hours.
  • Entry nodes that set "EntryStatistics 1" write statistics on the
    rough number and origins of connecting clients to disk every 24
    hours.
  • Relays that write any of the above statistics to disk and set
    "ExtraInfoStatistics 1" include the past 24 hours of statistics in
    their extra-info documents. Implements proposal 166.

Minor features (GeoIP and statistics):

  • Provide a log message stating which geoip file we're parsing
    instead of just stating that we're parsing the geoip file.
    Implements ticket 2432.
  • Make sure every relay writes a state file at least every 12 hours.
    Previously, a relay could go for weeks without writing its state
    file, and on a crash could lose its bandwidth history, capacity
    estimates, client country statistics, and so on. Addresses bug 3012.
  • Relays report the number of bytes spent on answering directory
    requests in extra-info descriptors similar to {read,write}-history.
    Implements enhancement 1790.
  • Report only the top 10 ports in exit-port stats in order not to
    exceed the maximum extra-info descriptor length of 50 KB. Implements
    task 2196.
  • If writing the state file to disk fails, wait up to an hour before
    retrying again, rather than trying again each second. Fixes bug
    2346; bugfix on Tor
  • Delay geoip stats collection by bridges for 6 hours, not 2 hours,
    when we switch from being a public relay to a bridge. Otherwise
    there will still be clients that see the relay in their consensus,
    and the stats will end up wrong. Bugfix on; fixes
    bug 932.
  • Update to the August 2 2011 Maxmind GeoLite Country database.

Minor features (clients):

  • When expiring circuits, use microsecond timers rather than
    one-second timers. This can avoid an unpleasant situation where a
    circuit is launched near the end of one second and expired right
    near the beginning of the next, and prevent fluctuations in circuit
    timeout values.
  • If we've configured EntryNodes and our network goes away and/or all
    our entrynodes get marked down, optimistically retry them all when
    a new socks application request appears. Fixes bug 1882.
  • Always perform router selections using weighted relay bandwidth,
    even if we don't need a high capacity circuit at the time. Non-fast
    circuits now only differ from fast ones in that they can use relays
    not marked with the Fast flag. This "feature" could turn out to
    be a horrible bug; we should investigate more before it goes into
    a stable release.
  • When we run out of directory information such that we can't build
    circuits, but then get enough that we can build circuits, log when
    we actually construct a circuit, so the user has a better chance of
    knowing what's going on. Fixes bug 1362.
  • Log SSL state transitions at debug level during handshake, and
    include SSL states in error messages. This may help debug future
    SSL handshake issues.
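"Weighted relay bandwidth" selection, as described in the third item above, means a relay is chosen with probability proportional to its (weighted) bandwidth. A self-contained sketch, with the random draw passed in for determinism (names are illustrative, not Tor's selection code):

```python
def pick_weighted(relays, u):
    """Pick a relay with probability proportional to its weighted
    bandwidth. `relays` is a list of (name, bandwidth) pairs and
    `u` is a uniform draw in [0, 1)."""
    total = sum(bw for _, bw in relays)
    target = u * total
    acc = 0
    for name, bw in relays:
        acc += bw
        if target < acc:
            return name
    return relays[-1][0]  # guard against floating-point edge cases

relays = [("fast", 9000), ("slow", 1000)]
print(pick_weighted(relays, 0.5))  # fast: the draw lands in its 90% share
```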

Minor features (directory authorities):

  • When a router changes IP address or port, authorities now launch
    a new reachability test for it. Implements ticket 1899.
  • Directory authorities now reject relays running any versions of
    Tor between and inclusive; they have
    known bugs that keep RELAY_EARLY cells from working on rendezvous
    circuits. Followup to fix for bug 2081.
  • Directory authorities now reject relays running any version of Tor
    older than That version is the earliest that fetches
    current directory information correctly. Fixes bug 2156.
  • Directory authorities now do an immediate reachability check as soon
    as they hear about a new relay. This change should slightly reduce
    the time between setting up a relay and getting listed as running
    in the consensus. It should also improve the time between setting
    up a bridge and seeing use by bridge users.
  • Directory authorities no longer launch a TLS connection to every
    relay as they startup. Now that we have 2k+ descriptors cached,
    the resulting network hiccup is becoming a burden. Besides,
    authorities already avoid voting about Running for the first half
    hour of their uptime.
  • Directory authorities now log the source of a rejected POSTed v3
    networkstatus vote, so we can track failures better.
  • Backport code from 0.2.3.x that allows directory authorities to
    clean their microdescriptor caches. Needed to resolve bug 2230.

Minor features (hidden services):

  • Use computed circuit-build timeouts to decide when to launch
    parallel introduction circuits for hidden services. (Previously,
    we would retry after 15 seconds.)
  • Don't allow v0 hidden service authorities to act as clients.
    Required by fix for bug 3000.
  • Ignore SIGNAL NEWNYM commands on relay-only Tor instances. Required
    by fix for bug 3000.
  • Make hidden services work better in private Tor networks by not
    requiring any uptime to join the hidden service descriptor
    DHT. Implements ticket 2088.
  • Log (at info level) when purging pieces of hidden-service-client
    state because of SIGNAL NEWNYM.

Minor features (controller interface):

  • New "GETINFO net/listeners/(type)" controller command to return
    a list of addresses and ports that are bound for listeners for a
    given connection type. This is useful when the user has configured
    "SocksPort auto" and the controller needs to know which port got
    chosen. Resolves another part of ticket 3076.
  • Have the controller interface give a more useful message than
    "Internal Error" in response to failed GETINFO requests.
  • Add a TIMEOUT_RATE keyword to the BUILDTIMEOUT_SET control port
    event, to give information on the current rate of circuit timeouts
    over our stored history.
  • The 'EXTENDCIRCUIT' control port command can now be used with
    a circ id of 0 and no path. This feature will cause Tor to build
    a new 'fast' general purpose circuit using its own path selection
    algorithm.
  • Added a BUILDTIMEOUT_SET controller event to describe changes
    to the circuit build timeout.
  • New controller command "getinfo config-text". It returns the
    contents that Tor would write if you send it a SAVECONF command,
    so the controller can write the file to disk itself.
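Replies to commands like "GETINFO net/listeners/(type)" come back as key=value lines in the control protocol's 250- reply format, terminated by "250 OK". A sketch of parsing the simple single-line form (based on the control-spec reply layout; real replies also have multi-line 250+ forms, which this ignores):

```python
def parse_getinfo_reply(lines):
    """Collect key=value pairs from '250-key=value' reply lines,
    stopping at the final '250 OK' (simplified sketch of the
    control-spec GETINFO reply format)."""
    result = {}
    for line in lines:
        if line == "250 OK":
            break
        if line.startswith("250-") and "=" in line:
            key, _, value = line[4:].partition("=")
            result[key] = value.strip('"')
    return result

reply = ['250-net/listeners/socks="127.0.0.1:9050"', "250 OK"]
print(parse_getinfo_reply(reply))
# {'net/listeners/socks': '127.0.0.1:9050'}
```

This is the piece a controller needs after "SocksPort auto": it learns the chosen port from the parsed value.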

Minor features (controller protocol):

  • Add a new ControlSocketsGroupWritable configuration option: when
    it is turned on, ControlSockets are group-writeable by the default
    group of the current user. Patch by Jérémy Bobbio; implements
    ticket 2972.
  • Tor now refuses to create a ControlSocket in a directory that is
    world-readable (or group-readable if ControlSocketsGroupWritable
    is 0). This is necessary because some operating systems do not
    enforce permissions on AF_UNIX sockets. Permissions on the
    directory holding the socket, however, seem to work everywhere.
  • Warn when CookieAuthFileGroupReadable is set but CookieAuthFile is
    not. This would lead to a cookie that is still not group readable.
    Closes bug 1843. Suggested by katmagic.
  • Future-proof the controller protocol a bit by ignoring keyword
    arguments we do not recognize.
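The directory-permission rule in the second item can be mirrored with an os.stat check: reject a world-accessible directory always, and a group-accessible one unless ControlSocketsGroupWritable is on. This is a sketch of the described behavior, not Tor's code:

```python
import os
import stat
import tempfile

def socket_dir_ok(path, group_writable=False):
    """Return False for a ControlSocket directory that is
    world-accessible, or group-accessible when the (hypothetical
    here) group_writable flag is off."""
    mode = os.stat(path).st_mode
    if mode & (stat.S_IROTH | stat.S_IWOTH | stat.S_IXOTH):
        return False
    if not group_writable and mode & (stat.S_IRGRP | stat.S_IWGRP | stat.S_IXGRP):
        return False
    return True

d = tempfile.mkdtemp()
os.chmod(d, 0o700)
print(socket_dir_ok(d))  # True: only the owner can reach the socket
```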

Minor features (more useful logging):

  • Revise most log messages that refer to nodes by nickname to
    instead use the "$key=nickname at address" format. This should be
    more useful, especially since nicknames are less and less likely
    to be unique. Resolves ticket 3045.
  • When an HTTPS proxy reports "403 Forbidden", we now explain
    what it means rather than calling it an unexpected status code.
    Closes bug 2503. Patch from Michael Yakubovich.
  • Rate-limit a warning about failures to download v2 networkstatus
    documents. Resolves part of bug 1352.
  • Rate-limit the "your application is giving Tor only an IP address"
    warning. Addresses bug 2000; bugfix on 0.0.8pre2.
  • Rate-limit "Failed to hand off onionskin" warnings.
  • When logging a rate-limited warning, we now mention how many messages
    got suppressed since the last warning.
  • Make the formerly ugly "2 unknown, 7 missing key, 0 good, 0 bad,
    2 no signature, 4 required" messages about consensus signatures
    easier to read, and make sure they get logged at the same severity
    as the messages explaining which keys are which. Fixes bug 1290.
  • Don't warn when we have a consensus that we can't verify because
    of missing certificates, unless those certificates are ones
    that we have been trying and failing to download. Fixes bug 1145.
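Rate-limited warnings that also report how many messages were suppressed (two of the items above) can be sketched as a small stateful logger; the clock is injected so the behavior is testable. Names are illustrative, not Tor's log API:

```python
class RateLimitedLog:
    """Emit a message at most once per `interval` seconds, and note
    how many similar messages were suppressed since the last one
    (sketch of the behavior described above)."""
    def __init__(self, interval, now=0):
        self.interval = interval
        self.last = now - interval  # let the first message through
        self.suppressed = 0

    def warn(self, msg, now):
        if now - self.last >= self.interval:
            note = (f" [{self.suppressed} similar messages suppressed]"
                    if self.suppressed else "")
            self.last, self.suppressed = now, 0
            return msg + note
        self.suppressed += 1
        return None  # swallowed, but counted

log = RateLimitedLog(60)
print(log.warn("Failed to hand off onionskin", now=0))   # emitted
print(log.warn("Failed to hand off onionskin", now=10))  # None
print(log.warn("Failed to hand off onionskin", now=70))  # emitted with count
```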

Minor features (log domains):

  • Add documentation for configuring logging at different severities in
    different log domains. We've had this feature since,
    but for some reason it never made it into the manpage. Fixes
    bug 2215.
  • Make it simpler to specify "All log domains except for A and B".
    Previously you needed to say "[*,~A,~B]". Now you can just say
  • Add a "LogMessageDomains 1" option to include the domains of log
    messages along with the messages. Without this, there's no way
    to use log domains without reading the source or doing a lot
    of guessing.
  • Add a new "Handshake" log domain for activities that happen
    during the TLS handshake.

Minor features (build process):

  • Make compilation with clang possible when using
    "--enable-gcc-warnings" by removing two warning options that clang
    hasn't implemented yet and by fixing a few warnings. Resolves
    ticket 2696.
  • Detect platforms that brokenly use a signed size_t, and refuse to
    build there. Found and analyzed by doorss and rransom.
  • Fix a bunch of compile warnings revealed by mingw with gcc 4.5.
    Resolves bug 2314.
  • Add support for statically linking zlib by specifying
    "--enable-static-zlib", to go with our support for statically
    linking openssl and libevent. Resolves bug 1358.
  • Instead of adding the svn revision to the Tor version string, report
    the git commit (when we're building from a git checkout).
  • Rename the "log.h" header to "torlog.h" so as to conflict with fewer
    system headers.
  • New --digests command-line switch to output the digests of the
    source files Tor was built with.
  • Generate our manpage and HTML documentation using Asciidoc. This
    change should make it easier to maintain the documentation, and
    produce nicer HTML. The build process fails if asciidoc cannot
    be found and building with asciidoc isn't disabled (via the
    "--disable-asciidoc" argument to ./configure. Skipping the manpage
    speeds up the build considerably.

Minor features (options / torrc):

  • Warn when the same option is provided more than once in a torrc
    file, on the command line, or in a single SETCONF statement, and
    the option is one that only accepts a single line. Closes bug 1384.
  • Warn when the user configures two HiddenServiceDir lines that point
    to the same directory. Bugfix on 0.0.6 (the version introducing
    HiddenServiceDir); fixes bug 3289.
  • Add new "perconnbwrate" and "perconnbwburst" consensus params to
    do individual connection-level rate limiting of clients. The torrc
    config options with the same names trump the consensus params, if
    both are present. Replaces the old "bwconnrate" and "bwconnburst"
    consensus params which were broken from through Closes bug 1947.
  • New config option "WarnUnsafeSocks 0" disables the warning that
    occurs whenever Tor receives a socks handshake using a version of
    the socks protocol that can only provide an IP address (rather
    than a hostname). Setups that do DNS locally over Tor are fine,
    and we shouldn't spam the logs in that case.
  • New config option "CircuitStreamTimeout" to override our internal
    timeout schedule for how many seconds until we detach a stream from
    a circuit and try a new circuit. If your network is particularly
    slow, you might want to set this to a number like 60.
  • New options for SafeLogging to allow scrubbing only log messages
    generated while acting as a relay. Specify "SafeLogging relay" if
    you want to ensure that only messages known to originate from
    client use of the Tor process will be logged unsafely.
  • Time and memory units in the configuration file can now be set to
    fractional units. For example, "2.5 GB" is now a valid value for
  • Support line continuations in the torrc config file. If a line
    ends with a single backslash character, the newline is ignored, and
    the configuration value is treated as continuing on the next line.
    Resolves bug 1929.
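Two of the torrc conveniences above, backslash line continuations and fractional size units, are easy to picture as a tiny parser. This is an illustrative sketch under the description given here (binary multipliers assumed; consult the manpage for Tor's exact unit handling):

```python
def join_continuations(text):
    """Join lines ending in a single backslash with the following
    line, per the continuation feature described above."""
    out, pending = [], ""
    for line in text.splitlines():
        if line.endswith("\\") and not line.endswith("\\\\"):
            pending += line[:-1]  # drop the backslash, keep accumulating
        else:
            out.append(pending + line)
            pending = ""
    if pending:
        out.append(pending)
    return out

UNITS = {"kb": 1 << 10, "mb": 1 << 20, "gb": 1 << 30}

def parse_size(value):
    """Accept fractional sizes such as '2.5 GB'."""
    number, unit = value.split()
    return int(float(number) * UNITS[unit.lower()])

print(parse_size("2.5 GB"))  # 2684354560
print(join_continuations("Log notice file \\\n/var/log/tor/notices.log"))
# ['Log notice file /var/log/tor/notices.log']
```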

Minor features (unit tests):

  • Revise our unit tests to use the "tinytest" framework, so we
    can run tests in their own processes, have smarter setup/teardown
    code, and so on. The unit test code has moved to its own
    subdirectory, and has been split into multiple modules.
  • Add a unit test for cross-platform directory-listing code.
  • Add some forgotten return value checks during unit tests. Found
    by coverity.
  • Use GetTempDir to find the proper temporary directory location on
    Windows when generating temporary files for the unit tests. Patch
    by Gisle Vanem.

Minor features (misc):

  • The "torify" script now uses torsocks where available.
  • Make Libevent log messages get delivered to controllers later,
    and not from inside the Libevent log handler. This prevents unsafe
    reentrant Libevent calls while still letting the log messages
    get through.
  • Certain Tor clients (such as those behind may
    want to fetch the consensus in an extra early manner. To enable this
    a user may now set FetchDirInfoExtraEarly to 1. This also depends on
    setting FetchDirInfoEarly to 1. Previous behavior will stay the same
    as only certain clients who must have this information sooner should
    set this option.
  • Expand homedirs passed to tor-checkkey. This should silence a
    coverity complaint about passing a user-supplied string into
    open() without checking it.
  • Make sure to disable DirPort if running as a bridge. DirPorts aren't
    used on bridges, and leaving one open makes bridge scanning somewhat
    easier.
  • Create the /var/run/tor directory on startup on OpenSUSE if it is
    not already created. Patch from Andreas Stieger. Fixes bug 2573.

Minor bugfixes (relays):

  • When a relay decides that its DNS is too broken for it to serve
    as an exit server, it advertised itself as a non-exit, but
    continued to act as an exit. This could create accidental
    partitioning opportunities for users. Instead, if a relay is
    going to advertise reject *:* as its exit policy, it should
    really act with exit policy "reject *:*". Fixes bug 2366.
    Bugfix on Tor Bugfix by user "postman" on trac.
  • Publish a router descriptor even if generating an extra-info
    descriptor fails. Previously we would not publish a router
    descriptor without an extra-info descriptor; this can cause fast
    exit relays collecting exit-port statistics to drop from the
    consensus. Bugfix on; fixes bug 2195.
  • When we're trying to guess whether we know our IP address as
    a relay, we would log various ways that we failed to guess
    our address, but never log that we ended up guessing it
    successfully. Now add a log line to help confused and anxious
    relay operators. Bugfix on; fixes bug 1534.
  • For bandwidth accounting, calculate our expected bandwidth rate
    based on the time during which we were active and not in
    soft-hibernation during the last interval. Previously, we were
    also considering the time spent in soft-hibernation. If this
    was a long time, we would wind up underestimating our bandwidth
    by a lot, and skewing our wakeup time towards the start of the
    accounting interval. Fixes bug 1789. Bugfix on 0.0.9pre5.
  • Demote a confusing TLS warning that relay operators might get when
    someone tries to talk to their ORPort. It is not the operator's
    fault, nor can they do anything about it. Fixes bug 1364; bugfix
  • Change "Application request when we're believed to be offline."
    notice to "Application request when we haven't used client
    functionality lately.", to clarify that it's not an error. Bugfix
    on; fixes bug 1222.

Minor bugfixes (bridges):

  • When a client starts or stops using bridges, never use a circuit
    that was built before the configuration change. This behavior could
    put at risk a user who uses bridges to ensure that her traffic
    only goes to the chosen addresses. Bugfix on; fixes
    bug 3200.
  • Do not reset the bridge descriptor download status every time we
    re-parse our configuration or get a configuration change. Fixes
    bug 3019; bugfix on
  • Users couldn't configure a regular relay to be their bridge. It
    didn't work because when Tor fetched the bridge descriptor, it found
    that it already had it, and didn't realize that the purpose of the
    descriptor had changed. Now we replace routers with a purpose other
    than bridge with bridge descriptors when fetching them. Bugfix on Fixes bug 1776.
  • In the special case where you configure a public exit relay as your
    bridge, Tor would be willing to use that exit relay as the last
    hop in your circuit as well. Now we fail that circuit instead.
    Bugfix on Fixes bug 2403. Reported by "piebeer".

Minor bugfixes (clients):

  • We now ask the other side of a stream (the client or the exit)
    for more data on that stream when the amount of queued data on
    that stream dips low enough. Previously, we wouldn't ask the
    other side for more data until either it sent us more data (which
    it wasn't supposed to do if it had exhausted its window!) or we
    had completely flushed all our queued data. This flow control fix
    should improve throughput. Fixes bug 2756; bugfix on the earliest
    released versions of Tor (svn commit r152).
  • When a client finds that an origin circuit has run out of 16-bit
    stream IDs, we now mark it as unusable for new streams. Previously,
    we would try to close the entire circuit. Bugfix on 0.0.6.
  • Make it explicit that we don't cannibalize one-hop circuits. This
    happens in the wild, but doesn't turn out to be a problem because
    we fortunately don't use those circuits. Many thanks to outofwords
    for the initial analysis and to swissknife who confirmed that
    two-hop circuits are actually created.
  • Resolve an edge case in path weighting that could make us misweight
    our relay selection. Fixes bug 1203; bugfix on 0.0.8rc1.
  • Make the DNSPort option work with libevent 2.x. Don't alter the
    behaviour for libevent 1.x. Fixes bug 1143. Found by SwissTorExit.

Minor bugfixes (directory authorities):

  • Make directory authorities more accurate at recording when
    relays that have failed several reachability tests became
    unreachable, so we can provide more accuracy at assigning Stable,
    Guard, HSDir, etc flags. Bugfix on Resolves bug 2716.
  • Directory authorities are now more robust to hops back in time
    when calculating router stability. Previously, if a run of uptime
    or downtime appeared to be negative, the calculation could give
    incorrect results. Bugfix on; noticed when fixing
    bug 1035.
  • Directory authorities will now attempt to download consensuses
    if their own efforts to make a live consensus have failed. This
    change means authorities that restart will fetch a valid
    consensus, and it means authorities that didn't agree with the
    current consensus will still fetch and serve it if it has enough
    signatures. Bugfix on; fixes bug 1300.
  • Never vote for a server as "Running" if we have a descriptor for
    it claiming to be hibernating, and that descriptor was published
    more recently than our last contact with the server. Bugfix on; fixes bug 911.
  • Directory authorities no longer change their opinion of, or vote on,
    whether a router is Running, unless they have themselves been
    online long enough to have some idea. Bugfix on
    Fixes bug 1023.

Minor bugfixes (hidden services):

  • Log malformed requests for rendezvous descriptors as protocol
    warnings, not warnings. Also, use a more informative log message
    in case someone sees it at log level warning without prior
    info-level messages. Fixes bug 2748; bugfix on
  • Accept hidden service descriptors if we think we might be a hidden
    service directory, regardless of what our consensus says. This
    helps robustness, since clients and hidden services can sometimes
    have a more up-to-date view of the network consensus than we do,
    and if they think that the directory authorities list us as an HSDir,
    we might actually be one. Related to bug 2732; bugfix on
  • Correct the warning displayed when a rendezvous descriptor exceeds
    the maximum size. Fixes bug 2750; bugfix on Found by
    John Brooks.
  • Clients and hidden services now use HSDir-flagged relays for hidden
    service descriptor downloads and uploads even if the relays have no
    DirPort set and the client has disabled TunnelDirConns. This will
    eventually allow us to give the HSDir flag to relays with no
    DirPort. Fixes bug 2722; bugfix on
  • Only limit the lengths of single HS descriptors, even when multiple
    HS descriptors are published to an HSDir relay in a single POST
    operation. Fixes bug 2948; bugfix on Found by hsdir.

Minor bugfixes (controllers):

  • Allow GETINFO fingerprint to return a fingerprint even when
    we have not yet built a router descriptor. Fixes bug 3577;
    bugfix on
  • Send a SUCCEEDED stream event to the controller when a reverse
    resolve succeeded. Fixes bug 3536; bugfix on 0.0.8pre1. Issue
    discovered by katmagic.
  • Remove a trailing asterisk from "exit-policy/default" in the
    output of the control port command "GETINFO info/names". Bugfix
  • Make the SIGNAL DUMP controller command work on FreeBSD. Fixes bug
    2917. Bugfix on
  • When we restart our relay, we might get a successful connection
    from the outside before we've started our reachability tests,
    triggering a warning: "ORPort found reachable, but I have no
    routerinfo yet. Failing to inform controller of success." This
    bug was harmless unless Tor is running under a controller
    like Vidalia, in which case the controller would never get a
    REACHABILITY_SUCCEEDED status event. Bugfix on;
    fixes bug 1172.
  • When a controller changes TrackHostExits, remove mappings for
    hosts that should no longer have their exits tracked. Bugfix on
  • When a controller changes VirtualAddrNetwork, remove any mappings
    for hosts that were automapped to the old network. Bugfix on
  • When a controller changes one of the AutomapHosts* options, remove
    any mappings for hosts that should no longer be automapped. Bugfix
  • Fix an off-by-one error in calculating some controller command
    argument lengths. Fortunately, this mistake is harmless since
    the controller code does redundant NUL termination too. Found by
    boboper. Bugfix on
  • Fix a bug in the controller interface where "GETINFO ns/asdaskljkl"
    would return "551 Internal error" rather than "552 Unrecognized key
    ns/asdaskljkl". Bugfix on
  • Don't spam the controller with events when we have no file
    descriptors available. Bugfix on (Rate-limiting
    for log messages was already solved in bug 748.)
  • Emit a GUARD DROPPED controller event for a case we missed.
  • Ensure DNS requests launched by "RESOLVE" commands from the
    controller respect the __LeaveStreamsUnattached setconf options. The
    same goes for requests launched via DNSPort or transparent
    proxying. Bugfix on; fixes bug 1525.

Minor bugfixes (config options):

  • Tor used to limit HttpProxyAuthenticator values to 48 characters.
    Change the limit to 512 characters by removing base64 newlines.
    Fixes bug 2752. Fix by Michael Yakubovich.
  • Complain if PublishServerDescriptor is given multiple arguments that
    include 0 or 1. This configuration will be rejected in the future.
    Bugfix on; closes bug 1107.
  • Disallow BridgeRelay 1 and ORPort 0 at once in the configuration.
    Bugfix on; closes bug 928.

Minor bugfixes (log subsystem fixes):

  • When unable to format an address as a string, report its value
    as "???" rather than reusing the last formatted address. Bugfix
  • Be more consistent in our treatment of file system paths. "~" should
    get expanded to the user's home directory in the Log config option.
    Fixes bug 2971; bugfix on, which introduced the
    feature for the -f and --DataDirectory options.

Minor bugfixes (memory management):

  • Don't stack-allocate the list of supplementary GIDs when we're
    about to log them. Stack-allocating NGROUPS_MAX gid_t elements
    could take up to 256K, which is way too much stack. Found by
    Coverity; CID #450. Bugfix on
  • Save a couple bytes in memory allocation every time we escape
    certain characters in a string. Patch from Florian Zumbiehl.

Minor bugfixes (protocol correctness):

  • When checking for 1024-bit keys, check for 1024 bits, not 128
    bytes. This allows Tor to correctly discard keys of length 1017
    through 1023. Bugfix on 0.0.9pre5.
  • Require that introduction point keys and onion handshake keys
    have a public exponent of 65537. Starts to fix bug 3207; bugfix
  • Handle SOCKS messages longer than 128 bytes correctly, rather
    than waiting forever for them to finish. Fixes bug 2330; bugfix
    on Found by doorss.
  • Never relay a cell for a circuit we have already destroyed.
    Between marking a circuit as closeable and finally closing it,
    it may have been possible for a few queued cells to get relayed,
    even though they would have been immediately dropped by the next
    OR in the circuit. Fixes bug 1184; bugfix on
  • Never queue a cell for a circuit that's already been marked
    for close.
  • Fix a spec conformance issue: the network-status-version token
    must be the first token in a v3 consensus or vote. Discovered by
    "parakeep". Bugfix on
  • A networkstatus vote must contain exactly one signature. Spec
    conformance issue. Bugfix on
  • When asked about a DNS record type we don't support via a
    client DNSPort, reply with NOTIMPL rather than an empty
    reply. Patch by intrigeri. Fixes bug 3369; bugfix on 2.0.1-alpha.
  • Make more fields in the controller protocol case-insensitive, since
    control-spec.txt said they were.
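The 1024-bit key check in the first item is a nice example of why bit length and byte length differ: any modulus of 1017 to 1024 bits occupies 128 bytes, so a byte-length test accepts keys the bit-length test correctly rejects. A sketch (illustrative, not Tor's code):

```python
def key_is_1024_bits(modulus):
    """Require exactly 1024 bits, not merely 128 bytes, as in the
    fix described above."""
    return modulus.bit_length() == 1024

weak = 1 << 1016  # a 1017-bit number
assert len(weak.to_bytes(128, "big")) == 128  # passes the old byte test
print(key_is_1024_bits(weak))        # False: rejected by the bit test
print(key_is_1024_bits(1 << 1023))   # True: a genuine 1024-bit value
```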

Minor bugfixes (log messages):

  • Fix a log message that said "bits" while displaying a value in
    bytes. Found by wanoskarnet. Fixes bug 3318; bugfix on
  • Downgrade "no current certificates known for authority" message from
    Notice to Info. Fixes bug 2899; bugfix on
  • Correctly describe errors that occur when generating a TLS object.
    Previously we would attribute them to a failure while generating a
    TLS context. Patch by Robert Ransom. Bugfix on; fixes
    bug 1994.
  • Fix an instance where a Tor directory mirror might accidentally
    log the IP address of a misbehaving Tor client. Bugfix on
  • Stop logging at severity 'warn' when some other Tor client tries
    to establish a circuit with us using weak DH keys. It's a protocol
    violation, but that doesn't mean ordinary users need to hear about
    it. Fixes the bug part of bug 1114. Bugfix on
  • If your relay can't keep up with the number of incoming create
    cells, it would log one warning per failure into your logs. Limit
    warnings to 1 per minute. Bugfix on 0.0.2pre10; fixes bug 1042.

Minor bugfixes (build fixes):

  • Fix warnings from GCC 4.6's "-Wunused-but-set-variable" option.
  • When warning about missing zlib development packages during compile,
    give the correct package names. Bugfix on
  • Fix warnings that newer versions of autoconf produce during
    ./ These warnings appear to be harmless in our case,
    but they were extremely verbose. Fixes bug 2020.
  • Squash a compile warning on OpenBSD. Reported by Tas; fixes
    bug 1848.

Minor bugfixes (portability):

  • Write several files in text mode, on OSes that distinguish text
    mode from binary mode (namely, Windows). These files are:
    'buffer-stats', 'dirreq-stats', and 'entry-stats' on relays
    that collect those statistics; 'client_keys' and 'hostname' for
    hidden services that use authentication; and (in the tor-gencert
    utility) newly generated identity and signing keys. Previously,
    we wouldn't specify text mode or binary mode, leading to an
    assertion failure. Fixes bug 3607. Bugfix on (when
    the DirRecordUsageByCountry option which would have triggered
    the assertion failure was added), although this assertion failure
    would have occurred in tor-gencert on Windows in
  • Selectively disable deprecation warnings on OS X because Lion
    started deprecating the shipped copy of openssl. Fixes bug 3643.
  • Use a wide type to hold sockets when built for 64-bit Windows.
    Fixes bug 3270.
  • Fix an issue that prevented static linking of libevent on
    some platforms (notably Linux). Fixes bug 2698; bugfix on,
    where we introduced the "--with-static-libevent" configure option.
  • Fix a bug with our locking implementation on Windows that couldn't
    correctly detect when a file was already locked. Fixes bug 2504,
    bugfix on
  • Build correctly on OSX with zlib 1.2.4 and higher with all warnings
    enabled.
  • Fix IPv6-related connect() failures on some platforms (BSD, OS X).
    Bugfix on; fixes first part of bug 2660. Patch by
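The text-mode fix above comes down to always passing an explicit mode to fopen(). On Windows, text mode translates '\n' to "\r\n" on write; on POSIX systems the distinction is a no-op. A minimal sketch, with a hypothetical helper name (this is not Tor's actual file-writing API):

```c
#include <stdio.h>

/* Write 'contents' to 'fname', explicitly choosing text or binary
 * mode rather than leaving the choice unspecified. Illustrative
 * sketch only. Returns 0 on success, -1 on failure. */
int write_str_to_file(const char *fname, const char *contents, int binary)
{
  FILE *f = fopen(fname, binary ? "wb" : "w");
  if (!f)
    return -1;
  if (fputs(contents, f) == EOF) {
    fclose(f);
    return -1;
  }
  return fclose(f) == 0 ? 0 : -1;
}
```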

Minor bugfixes (code correctness):

  • Always NUL-terminate the sun_path field of a sockaddr_un before
    passing it to the kernel. (Not a security issue: kernels are
    smart enough to reject bad sockaddr_uns.) Found by Coverity;
    CID #428. Bugfix on Tor
  • Make connection_printf_to_buf()'s behaviour sane. Its callers
    expect it to emit a CRLF iff the format string ends with CRLF;
    it actually emitted a CRLF iff (a) the format string ended with
    CRLF or (b) the resulting string was over 1023 characters long or
    (c) the format string did not end with CRLF *and* the resulting
    string was 1021 characters long or longer. Bugfix on;
    fixes part of bug 3407.
  • Make send_control_event_impl()'s behaviour sane. Its callers
    expect it to always emit a CRLF at the end of the string; it
    might have emitted extra control characters as well. Bugfix on; fixes another part of bug 3407.
  • Make crypto_rand_int() check the value of its input correctly.
    Previously, it accepted values up to UINT_MAX, but could return a
    negative number if given a value above INT_MAX+1. Found by George
    Kadianakis. Fixes bug 3306; bugfix on 0.2.2pre14.
  • Fix a potential null-pointer dereference while computing a
    consensus. Bugfix on tor-, found with the help of
    clang's analyzer.
  • If we fail to compute the identity digest of a v3 legacy keypair,
    warn, and don't use a buffer-full of junk instead. Bugfix on; fixes bug 3106.
  • Resolve an untriggerable issue in smartlist_string_num_isin(),
    where if the function had ever in the future been used to check
    for the presence of a too-large number, it would have given an
    incorrect result. (Fortunately, we only used it for 16-bit
    values.) Fixes bug 3175; bugfix on
  • Be more careful about reporting the correct error from a failed
    connect() system call. Under some circumstances, it was possible to
    look at an incorrect value for errno when sending the end reason.
    Bugfix on
  • Correctly handle "impossible" overflow cases in connection byte
    counting, where we write or read more than 4GB on an edge connection
    in a single second. Bugfix on
  • Avoid a double mark-for-free warning when failing to attach a
    transparent proxy connection. Bugfix on Fixes
    bug 2279.
  • Correctly detect failure to allocate an OpenSSL BIO. Fixes bug 2378;
    found by "cypherpunks". This bug was introduced before the first
    Tor release, in svn commit r110.
  • Fix a bug in bandwidth history state parsing that could have been
    triggered if a future version of Tor ever changed the timing
    granularity at which bandwidth history is measured. Bugfix on
  • Add assertions to check for overflow in arguments to
    base32_encode() and base32_decode(); fix a signed-unsigned
    comparison there too. These bugs are not actually reachable in Tor,
    but it's good to prevent future errors too. Found by doorss.
  • Avoid a bogus overlapped memcpy in tor_addr_copy(). Reported by
  • Set target port in get_interface_address6() correctly. Bugfix
    on and; fixes second part of bug 2660.
  • Fix an impossible-to-actually-trigger buffer overflow in relay
    descriptor generation. Bugfix on
  • Fix numerous small code-flaws found by Coverity Scan Rung 3.
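The sun_path fix in the list above amounts to zero-filling the structure and refusing paths that would not fit. A minimal sketch under those assumptions; the helper name is hypothetical, not Tor's API:

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Illustrative sketch: fill a sockaddr_un so that sun_path is always
 * NUL-terminated, rejecting paths that would be truncated.
 * Returns 0 on success, -1 if the path is too long. */
int fill_sockaddr_un(struct sockaddr_un *sa, const char *path)
{
  memset(sa, 0, sizeof(*sa));
  if (strlen(path) >= sizeof(sa->sun_path))
    return -1;                       /* would truncate: refuse */
  sa->sun_family = AF_UNIX;
  strncpy(sa->sun_path, path, sizeof(sa->sun_path) - 1);
  /* The memset above already guarantees the final byte is NUL. */
  return 0;
}
```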
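The crypto_rand_int() fix above is a bounds check: a range wider than INT_MAX+1 cannot be represented as a non-negative int, so such requests must be rejected up front. A sketch of the check, using rand() as a stand-in for a CSPRNG (Tor's real code draws from its crypto layer, and this helper name is illustrative):

```c
#include <assert.h>
#include <limits.h>
#include <stdlib.h>

/* Return a random value in [0, max). Rejects max values whose range
 * would not fit in a non-negative int, which is the class of input
 * the fix above guards against. Modulo bias is ignored for brevity. */
int checked_rand_int(unsigned int max)
{
  assert(max > 0);
  assert(max <= (unsigned int)INT_MAX + 1);
  return (int)((unsigned int)rand() % max);
}
```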

Minor bugfixes (code improvements):

  • After we free an internal connection structure, overwrite it
    with a different memory value than we use for overwriting a freed
    internal circuit structure. Should help with debugging. Suggested
    by bug 1055.
  • If OpenSSL fails to make a duplicate of a private or public key, log
    an error message and try to exit cleanly. May help with debugging
    if bug 1209 ever remanifests.
  • The manpage and the source used different capitalization conventions
    for acronyms in some option names. Fix those in favor of the
    manpage, as it makes sense to capitalize acronyms.
  • Take a first step towards making or.h smaller by splitting out
    function definitions for all source files in src/or/. Leave
    structures and defines in or.h for now.
  • Remove a few dead assignments during router parsing. Found by
  • Don't use 1-bit wide signed bit fields. Found by Coverity.
  • Avoid signed/unsigned comparisons by making SIZE_T_CEILING unsigned.
    None of the cases where we did this before were wrong, but by making
    this change we avoid warnings. Fixes bug 2475; bugfix on
  • The memarea code now uses a sentinel value at the end of each area
    to make sure nothing writes beyond the end of an area. This might
    help debug some conceivable causes of bug 930.
  • Always treat failure to allocate an RSA key as an unrecoverable
    allocation error.
  • Add some more defensive programming for architectures that can't
    handle unaligned integer accesses. We don't know of any actual bugs
    right now, but that's the best time to fix them. Fixes bug 1943.
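The memarea sentinel mentioned above is a classic "canary": a known value planted just past the usable region, checked later to detect overruns. A minimal sketch; the value, layout, and names here are made up, and Tor's memarea code differs in detail:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define SENTINEL 0x90589521u  /* arbitrary canary value for this sketch */

typedef struct {
  size_t size;
  unsigned char *buf;     /* usable region, followed by the sentinel */
} area_t;

area_t *area_new(size_t size)
{
  area_t *a = malloc(sizeof(area_t));
  uint32_t s = SENTINEL;
  if (!a)
    return NULL;
  a->size = size;
  a->buf = malloc(size + sizeof(uint32_t));
  if (!a->buf) {
    free(a);
    return NULL;
  }
  memcpy(a->buf + size, &s, sizeof(s));  /* plant the canary */
  return a;
}

/* Returns 1 if the sentinel is intact, 0 if something wrote past
 * the end of the area. */
int area_check(const area_t *a)
{
  uint32_t s;
  memcpy(&s, a->buf + a->size, sizeof(s));
  return s == SENTINEL;
}
```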

Minor bugfixes (misc):

  • Fix a rare bug in rend_fn unit tests: we would fail a test when
    a randomly generated port is 0. Diagnosed by Matt Edman. Bugfix
    on; fixes bug 1808.
  • Where available, use Libevent 2.0's periodic timers so that our
    once-per-second cleanup code gets called even more closely to
    once per second than it would otherwise. Fixes bug 943.
  • Ignore OutboundBindAddress when connecting to localhost.
    Connections to localhost need to come _from_ localhost, or else
    local servers (like DNS and outgoing HTTP/SOCKS proxies) will often
    refuse to listen.
  • Update our OpenSSL 0.9.8l fix so that it works with OpenSSL 0.9.8m.
  • If any of the v3 certs we download are unparseable, we should
    actually notice the failure so we don't retry indefinitely. Bugfix
    on 0.2.0.x; reported by "rotator".
  • When Tor fails to parse a descriptor of any kind, dump it to disk.
    Might help diagnosing bug 1051.
  • Make our 'torify' script more portable; if we have only one of
    'torsocks' or 'tsocks' installed, don't complain to the user;
    and explain our warning about tsocks better.
  • Fix some urls in the exit notice file and make it XHTML1.1 strict
    compliant. Based on a patch from Christian Kujau.

Documentation changes:

  • Modernize the doxygen configuration file slightly. Fixes bug 2707.
  • Resolve all doxygen warnings except those for missing documentation.
    Fixes bug 2705.
  • Add doxygen documentation for more functions, fields, and types.
  • Convert the HACKING file to asciidoc, and add a few new sections
    to it, explaining how we use Git, how we make changelogs, and
    what should go in a patch.
  • Document the default socks host and port ( for
  • Removed some unnecessary files from the source distribution. The
    AUTHORS file has now been merged into the people page on the
    website. The roadmaps and design doc can now be found in the
    projects directory in svn.

Deprecated and removed features (config):

  • Remove the torrc.complete file. It hasn't been kept up to date
    and users will have better luck checking out the manpage.
  • Remove the HSAuthorityRecordStats option that version 0 hidden
    service authorities could use to track statistics of overall v0
    hidden service usage.
  • Remove the obsolete "NoPublish" option; it has been flagged
    as obsolete and has produced a warning since
  • Caches no longer download and serve v2 networkstatus documents
    unless the FetchV2Networkstatus flag is set: these documents
    haven't been used by clients or relays since 0.2.0.x. Resolves
    bug 3022.

Deprecated and removed features (controller):

  • The controller no longer accepts the old obsolete "addr-mappings/"
    or "unregistered-servers-" GETINFO values.
  • The EXTENDED_EVENTS and VERBOSE_NAMES controller features are now
    always on; using them is necessary for correct forward-compatible
    controllers.

Deprecated and removed features (misc):

  • Hidden services no longer publish version 0 descriptors, and clients
    do not request or use version 0 descriptors. However, the old hidden
    service authorities still accept and serve version 0 descriptors
    when contacted by older hidden services/clients.
  • Remove undocumented option "-F" from tor-resolve: it hasn't done
    anything since
  • Remove everything related to building the expert bundle for OS X.
    It has confused many users, doesn't work right on OS X 10.6,
    and is hard to get rid of once installed. Resolves bug 1274.
  • Remove support for .noconnect style addresses. Nobody was using
    them, and they provided another avenue for detecting Tor users
    via application-level web tricks.
  • When we fixed bug 1038 we had to put in a restriction not to send
    RELAY_EARLY cells on rend circuits. This was necessary as long
    as relays using Tor through were
    active. Now remove this obsolete check. Resolves bug 2081.
  • Remove workaround code to handle directory responses from servers
    that had bug 539 (they would send HTTP status 503 responses _and_
    send a body too). Since only server versions before were affected, there is no longer reason to
    keep the workaround in place.
  • Remove the old 'fuzzy time' logic. It was supposed to be used for
    handling calculations where we have a known amount of clock skew and
    an allowed amount of unknown skew. But we only used it in three
    places, and we never adjusted the known/unknown skew values. This is
    still something we might want to do someday, but if we do, we'll
    want to do it differently.
  • Remove the "--enable-iphone" option to ./configure. According to
    reports from Marco Bonetti, Tor builds fine without any special
    tweaking on recent iPhone SDK versions.

Tor released

On November 19, we released the latest in the Tor alpha series, version This release lays the groundwork for many upcoming features:
support for the new lower-footprint "microdescriptor" directory design,
future-proofing our consensus format against new hash functions or
other changes, and an Android port. It also makes Tor compatible with
the upcoming OpenSSL 0.9.8l release, and fixes a variety of bugs.

It can be downloaded at

Major features:

  • Directory authorities can now create, vote on, and serve multiple
    parallel formats of directory data as part of their voting process.
    Partially implements Proposal 162: "Publish the consensus in
    multiple flavors".

  • Directory authorities can now agree on and publish small summaries
    of router information that clients can use in place of regular
    server descriptors. This transition will eventually allow clients
