Nine Questions about Hidden Services

This is an interview with a Tor developer who works on hidden services. Please note that Tor Browser and hidden services are two different things. Tor Browser (downloadable from the Tor Project website) allows you to browse the web anonymously. A hidden service is a site you visit or a service you use that uses Tor technology to stay secure and, if the owner wishes, anonymous. The secure messaging app Ricochet is an example of a hidden service. Tor developers use the terms "hidden services" and "onion services" interchangeably.
1. What are your priorities for onion services development?
Personally I think it’s very important to work on the security of hidden services; that’s a big priority.
The plan for the next generation of onion services includes enhanced security as well as improved performance. We’ve broken the development down into smaller modules and we’re already starting to build the foundation. The whole thing is a pretty insane engineering job.

2. What don't people know about onion services?
Until earlier this year, hidden services were a labor of love that Tor developers worked on in their spare time. Now we have a very small group of developers, but in 2016 we want to grow that engineering capacity. There is a lot of enthusiasm within Tor for hidden services, but we need funding and more senior developers to build the next generation.
3. What are some of Tor's plans for mitigating attacks?
The CMU attack was fundamentally a "guard node" attack; guard nodes are the first hop of a Tor circuit and hence the only part of the network that can see the real IP address of a hidden service. Last July we fixed the attack vector that CMU was using (it was called the RELAY_EARLY confirmation attack), and since then we've been devising improved designs for guard node security.
For example, in the past, each onion service would have three guard nodes assigned to it. Since last September, each onion service uses only one guard node, so it exposes itself to fewer relays. This change alone makes an attack against an onion service much less likely.
Several of our developers are thinking about how to do better guard node selection. One of us is writing code on this right now.
We are modeling how onion services pick guard nodes currently, and we're simulating other ways to do it to see which one exposes itself to fewer relays—the fewer relays you are exposed to, the safer you are.
We’ve also been working on other security improvements. For instance, a series of papers and talks have abused the directory system of hidden services to try to estimate the activity of particular hidden services, or to launch denial-of-service attacks against them.

We’re going to fix this by making it much harder for the attacker's nodes to become the responsible relay of a hidden service (say, catfacts) and be able to track uptime and usage information. We will use a "distributed random number generator"--many computers teaming up to generate a single, fresh unpredictable random number. 
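
To make that concrete, here is a toy "commit and reveal" sketch in shell. This is an illustration of the general technique only, not the actual Tor design, and the two "parties" are just local variables:

 # Commit phase: each party generates a secret and publishes only its hash,
 # so nobody can choose their input after seeing anyone else's.
 R1=$(head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n')
 R2=$(head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n')
 printf '%s' "$R1" | sha256sum    # party 1's commitment
 printf '%s' "$R2" | sha256sum    # party 2's commitment
 # Reveal phase: the secrets are published and checked against the
 # commitments; the fresh shared value is then derived from all inputs,
 # so it is unpredictable as long as any one input was honestly random.
 printf '%s%s' "$R1" "$R2" | sha256sum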

Another important thing we're doing is to make it impossible for a directory service to harvest addresses in the new design. If you don't know a hidden service address, then under the new system, you won't find it out just by hosting its HSDir entry.

There are also interesting performance projects: we want to make .onion services scalable in large infrastructures like Facebook's--we want high availability and better load balancing; we want to make it production-grade.

[Load balancing distributes the traffic load of a website to multiple servers so that no one server gets overloaded with all the users. Overloaded servers stop responding and create other problems. An attack that purposely overloads a website to cause it to stop responding is called a Denial of Service (DoS) attack.  - Kate]
There are also onion services that don’t care to stay hidden, like Blockchain or Facebook; we can make those much faster, which is quite exciting.
Meanwhile Nick is working on a new encryption design--magic circuit crypto that will make it harder to do active confirmation attacks. [Nick Mathewson is the co-founder of the Tor Project and the chief architect of our software.] Active confirmation attacks are much more powerful than passive attacks, and we can do a better job at defending against them.
A particular type of confirmation attack that Nick's new crypto is going to solve is the "tagging attack"; Roger wrote a blog post about these years ago, called "One Cell Is Enough," explaining how they work and why they are so powerful.
4. Do you run an onion service yourself?  
Yes, I do run onion services; I run an onion service on every box I have. I connect to the PC in my house from anywhere in the world through SSH—I connect to my onion service instead of my house IP. People can see my laptop accessing Tor but don't know who I am or where I go.
Also, onion services have a property called NAT punching (NAT = Network Address Translation). NAT blocks incoming connections; it builds walls around you. Onion services can punch through NAT and penetrate a firewall. On my university campus, the firewall does not allow incoming connections to my SSH server, but with an onion service the firewall is irrelevant.
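
For readers who want to try this at home, here is a minimal sketch of that setup on a typical Linux tor install; the directory path is up to you, and the address below is a made-up placeholder. In the torrc on the machine running SSH:

 HiddenServiceDir /var/lib/tor/ssh_onion/
 HiddenServicePort 22 127.0.0.1:22

After restarting tor, the hostname file in that directory contains the service's .onion address, and from anywhere you can connect with something like:

 torsocks ssh user@youronionaddress.onion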
5. What is your favorite onion service that a nontechnical person might use? 
I use Ricochet for my peer-to-peer chatting--it has a very nice UI and works well.
6. Do you think it’s safe to run an onion service? 
It depends on your adversary. I think onion services provide adequate security against most real life adversaries.

However, if a serious and highly motivated adversary were after me, I would not rely solely on the security of onion services. If your adversary can wiretap the whole Western Internet, or has a million-dollar budget, and you depend only on hidden services for your anonymity, then you should probably up your game. You can add more layers of anonymity by buying the servers that host your hidden service anonymously (e.g., with Bitcoin), so that even if the adversary deanonymizes the service, they can't get your identity from the server. Also, studying and actually understanding the Tor protocol and its threat model is essential if you are defending against motivated adversaries.

7. What onion services don’t exist yet that you would like to see? 
Onion services right now are super-volatile; they may appear for three months and then they disappear. For example, there was a Twitter clone, Tor statusnet; it was quite fun--small but cozy. The guy or girl who was running it couldn’t do it any longer. So, goodbye! It would be very nice to have a Twitter clone in onion services. Everyone would be anonymous. Short messages by anonymous people would be an interesting thing.
I would like to see apps for mobile phones using onion services more—SnapChat over Tor, Tinder over Tor—using Orbot or whatever. 
A good search engine for onion services would help too. Much of this volatility comes down to not having a search engine--you could have a great service, but only 500 sketchoids on the Internet might know about it.
Right now, hidden services are misty and hard to see, with the fog of war all around. A sophisticated search engine could highlight the nice things and the nice communities; those would get far more traffic and users and would stay up longer.

The second issue is how you make things. For many people, it's not easy to set up an onion service. You have to open Tor, hack some configuration files, and more.

We need a system where you double click, and bam, you have an onion service serving your blog. Griffin Boyce is developing a tool for this named Stormy. If we have a good search engine and a way for people to start up onion services easily, we will have a much nicer and more normal Internet in the onion space. 
8. What is the biggest misconception about onion services?
People don't realize how many use cases there are for onion services or the inventive ways that people are using them already. Only a few onion services ever become well known and usually for the wrong reasons.
I think it ties back to the previous discussion: the onion services we all enjoy have no way of getting to us. Right now, they are marooned on their island of hiddenness.
9. What is the biggest misconception about onion services development?
It’s a big and complex project—it’s building a network inside a network; building a thing inside a thing. But we are a tiny team. We need the resources and person power to do it.

(Interview conducted by Kate Krauss)

Preliminary analysis of Hacking Team's slides

A few weeks ago, Hacking Team was bragging publicly about a Tor Browser exploit. We've learned some details of their proposed attack from a leaked powerpoint presentation that was part of the Hacking Team dump.

The good news is that they don't appear to have any exploit on Tor or on Tor Browser. The other good news is that their proposed attack doesn't scale well. They need to put malicious hardware on the local network of their target user, which requires choosing their target, locating her, and then arranging for the hardware to arrive in the right place. So it's not really practical to launch the attack on many Tor users at once.

But they actually don't need an exploit on Tor or Tor Browser. Here's the proposed attack in a nutshell:

1) Pick a target user (say, you), figure out how you connect to the Internet, and install their attacking hardware on your local network (e.g. inside your ISP).

2) Wait for you to browse the web without Tor Browser, i.e. with some other browser like Firefox or Chrome or Safari, and then insert some sort of exploit into one of the web pages you receive (maybe the Flash 0-day we learned about from the same documents, or maybe some other exploit).

3) Once they've taken control of your computer, they configure your Tor Browser to use a SOCKS proxy on a remote computer that they control. In effect, rather than using the Tor client that's part of Tor Browser, you'll be using their remote Tor client, so they get to intercept and watch your traffic before it enters the Tor network.

You have to stop them at step two, because once they've broken into your computer, they have many options for attacking you from there.

Their proposed attack requires Hacking Team (or your government) to already have you in their sights. This is not mass surveillance — this is very targeted surveillance.

Another answer is to run a system like Tails, which avoids interacting with any local resources. In this case there should be no opportunity to insert an exploit from the local network. But that's still not a complete solution: some coffeeshops, hotels, etc will demand that you interact with their local login page before you can access the Internet. Tails includes what they call their 'unsafe' browser for these situations, and you're at risk during that brief period when you use it.

Ultimately, security here comes down to having safer browsers. We continue to work on ways to make Tor Browser more resilient against attacks, but the key point here is that they'll go after the weakest link on your system — and at least in the scenarios they describe, Tor Browser isn't the weakest link.

As a final point, note that this is just a powerpoint deck (probably a funding pitch), and we've found no indication yet that they ever followed through on their idea.

We'll update you with more information if we learn anything further. Stay safe out there!

iSEC Partners Conducts Tor Browser Hardening Study

In May, the Open Technology Fund commissioned iSEC Partners to study current and future hardening options for the Tor Browser. The Open Technology Fund is the primary funder of Tor Browser development, and it commissions security analysis and review for all of the projects that it funds as a standard practice. We worked with iSEC to define the scope of the engagement to focus on the following six main areas:

  1. Review of the current state of hardening in Tor Browser
  2. Investigate additional hardening options and instrumentation
  3. Perform historical vulnerability analysis on Firefox, in order to make informed vulnerability surface reduction recommendations
  4. Investigate image, audio, and video codecs and their respective libraries' vulnerability history
  5. Review our current about:config settings, both for vulnerability surface reduction and security
  6. Review alternate/obscure protocol and application handlers

The complete report is available in the iSEC publications github repo. All tickets related to the report can be found using the tbb-isec-report keyword. General Tor Browser security tickets can be found using the tbb-security keyword.

Major Findings and Recommendations

The report had the following high-level findings and recommendations.

  • Address Space Layout Randomization is disabled on Windows and Mac

  • Due to our use of cross-compilation and non-standard toolchains in our reproducible build system, several hardening features have ended up disabled. We knew about the Windows issues prior to this report, and should have a fix for them soon. However, the MacOS issues are news to us, and appear to require that we build 64-bit versions of the Tor Browser for full support. The parent ticket for all basic hardening issues in Tor Browser is bug #10065.

  • Participate in Pwn2Own

  • iSEC recommended that we find a sponsor to fund a Pwn2Own reward for bugs specific to Tor Browser in a semi-hardened configuration. We are very interested in this idea and would love to talk with anyone willing to sponsor us in this competition, but we're not yet certain that our hardening options will have stabilized with enough lead time for the 2015 contest next March.

  • Test and recommend the Microsoft Enhanced Mitigation Experience Toolkit on Windows

  • The Microsoft Enhanced Mitigation Experience Toolkit is an optional toolkit that Windows users can run to further harden Tor Browser against exploitation. We've created bug #12820 for this analysis.

  • Replace the Firefox memory allocator (jemalloc) with ctmalloc/PartitionAlloc

  • PartitionAlloc is a memory allocator designed by Google specifically to mitigate common heap-based vulnerabilities by hardening free lists, creating partitioned allocation regions, and using guard pages to protect metadata and partitions. Its basic hardening features can be picked up by using it as a simple malloc replacement library (as ctmalloc). Bug #10281 tracks this work.

  • Make use of advanced PartitionAlloc features and other instrumentation to reduce the risk from use-after-free vulnerabilities

  • The iSEC vulnerability review found that the overwhelming majority of vulnerabilities to date in Firefox were use-after-free, followed closely by general heap corruption. In order to mitigate these vulnerabilities, we would need to make use of the heap partitioning features of PartitionAlloc to actually ensure that allocations are partitioned (for example, by using the existing tags from Firefox's about:memory). We will also investigate enabling assertions in limited areas of the codebase, such as the refcounting system, the JIT, and the JavaScript engine.

Vulnerability Surface Reduction (Security Slider)

A large portion of the report was also focused on analyzing historical Firefox vulnerability data and other sources of large vulnerability surface for a planned "Security Slider" UI in Tor Browser.

The Security Slider was first suggested by Roger Dingledine as a way to make it easy for users to trade off between functionality and security, gradually disabling features ranked by both vulnerability count and web prevalence/usability impact.

The report makes several recommendations along these lines, but a brief distillation can be found on the ticket for the slider.

At a high level, we plan for four levels in this slider. "Low" security will be the current Tor Browser settings, with the addition of JIT support. "Medium-Low" will disable most of the JIT and make HTML5 media click-to-play via NoScript. "Medium-High" will disable the rest of the JIT, disable JavaScript on non-HTTPS URL bar origins, and disable SVG. "High" will fully disable JavaScript, block remote fonts via NoScript, and disable all media codecs except for WebM (which will remain click-to-play).

The Long Term

A web browser is a very large and complicated piece of software, and while we believe that the privacy properties of Tor Browser are better than those of every other web browser currently available, it is very important to us that we raise the bar to successful code execution and exploitation of Tor Browser as well.

We are very eager to see the deployment of sandboxing support in Firefox, which should go a long way to improving the security of Tor Browser as well. To improve security for their users, Mozilla has recently shifted 10 engineers into the Electrolysis project, which provides the groundwork for producing a multiprocess sandbox architecture for the desktop Firefox. This will allow them to provide a Google Chrome style security sandbox for website content, to reduce the risk from software vulnerabilities, and generally impede exploitability.

Until that time, we will also be investigating providing hardened builds of Tor Browser using the AddressSanitizer and Virtual Table Verification features of newer GCC releases. While this will not eliminate all vectors of memory corruption-based exploitation (in particular, the hardening properties of AddressSanitizer are not as good as those provided by SoftBounds+CETS for example, but that compiler is not yet production-ready), it should raise the bar to exploitation. We are hopeful that these builds in combination with PartitionAlloc and the Security Slider will satisfy the needs of our users who require high security and who are willing to trade performance and usability in order to get it.

We also hope to include optional application-wide sandboxes for Tor Browser as part of the official distribution.

Why not Google Chrome?

It is no secret that in many ways, both we and Mozilla are playing catch-up to reach the level of code execution security provided by Google Chrome, and in fact closely following the Google Chrome security team was one of the recommendations of the iSEC report.

In particular, Google Chrome benefits from a multiprocess sandboxing architecture, as well as several further hardening options and innovations (such as PartitionAlloc).

Unfortunately, our budget for the browser project is still very constrained compared to the amount of work that is required to provide the privacy properties we feel are important, and Firefox remains a far more cost-effective platform for us for several reasons. In particular, Firefox's flexible extension system, fully scriptable UI, solid proxy support, and its long Extended Support Release cycle all allow us to accomplish far more with fewer resources than we could with any other web browser.

Further, Google Chrome is far less amenable to supporting basic web privacy and Tor-critical features (such as solid proxy support) than Mozilla Firefox. Initial efforts to work with the Google Chrome team saw some success in terms of adding APIs that are crucial to addons such as HTTPS-Everywhere, but we ran into several roadblocks when it came to Tor-specific features and changes. In particular, several bugs required for basic proxy-safe Tor support for Google Chrome's Incognito Mode ended up blocked for various reasons.

The worst offender on this front is the use of the Microsoft Windows CryptoAPI for certificate validation, without any alternative. This bug means that certificate revocation checking and intermediate certificate retrieval happen outside of the browser's proxy settings, and is subject to alteration by the OEM and/or the enterprise administrator. Worse, beyond the Tor proxy issues, the use of this OS certificate validation API means that the OEM and enterprise also have a simple entry point for installing their own root certificates to enable transparent HTTPS man-in-the-middle, with full browser validation and no user consent or awareness.

All of this is not to mention the need for defenses against third-party tracking and fingerprinting to prevent the linking of Tor activity to non-Tor usage; such defenses would also be useful for the wider non-Tor userbase.

While we'd love for this situation to change, and are open to working with Google to improve things, at present it means that our only option for Chrome is to maintain an even more invasive fork than our current Firefox patch set, with much less likelihood of a future merge than with Firefox. As a ballpark estimate, maintaining such a fork would require somewhere between 3 and 5 times the engineering staff and infrastructure we currently have at our disposal, in addition to the ramp-up time to port our current feature set over.

Unless either our funding situation or Google's attitude towards the features we require changes, Mozilla Firefox will remain the best platform for us to demonstrate that it is in fact possible to provide true privacy by design for the web for those who want it. It is very distressing that this means playing catch-up and forcing our users to make usability tradeoffs in exchange for improved browser security, but we will continue to do what we can to improve that situation, both with Mozilla and with our own independent efforts.

Mission Impossible: Hardening Android for Security and Privacy

Updates: See the Changes section for a list of changes since initial posting.

Executive Summary

The future is here, and ahead of schedule. Come join us, the weather's nice.

This blog post describes the installation and configuration of a prototype of a secure, full-featured, Android telecommunications device with full Tor support, individual application firewalling, true cell network baseband isolation, and optional ZRTP encrypted voice and video support. ZRTP does run over UDP which is not yet possible to send over Tor, but we are able to send SIP account login and call setup over Tor independently.

The SIP client we recommend also supports dialing normal telephone numbers if you have a SIP gateway that provides trunking service.

Aside from a handful of binary blobs to manage the device firmware and graphics acceleration, the entire system can be assembled (and recompiled) using only FOSS components. However, as an added bonus, we will describe how to handle the Google Play store as well, to mitigate the two infamous Google Play Backdoors.


Android is the most popular mobile platform in the world, with a wide variety of applications, including many applications that aid in communications security, censorship circumvention, and activist organization. Moreover, the core of the Android platform is Open Source, auditable, and modifiable by anyone.

Unfortunately though, mobile devices in general and Android devices in particular have not been designed with privacy in mind. In fact, they've seemingly been designed with nearly the opposite goal: to make it easy for third parties, telecommunications companies, sophisticated state-sized adversaries, and even random hackers to extract all manner of personal information from the user. This includes the full content of personal communications with business partners and loved ones. Worse still, by default, the user is given very little in the way of control or even informed consent about what information is being collected and how.

This post aims to address this, but we must first admit we stand on the shoulders of giants. Organizations like Cyanogen, F-Droid, the Guardian Project, and many others have done a great deal of work to try to improve this situation by restoring control of Android devices to the user, and to ensure the integrity of our personal communications. However, all of these projects have shortcomings and often leave gaps in what they provide and protect. Even in cases where proper security and privacy features exist, they typically require extensive configuration to use safely, securely, and correctly.

This blog post enumerates and documents these gaps, describes workarounds for serious shortcomings, and provides suggestions for future work.

It is also meant to serve as a HOWTO to walk interested, technically capable people through the end-to-end installation and configuration of a prototype of a secure and private Android device, where access to the network is restricted to an approved list of applications, and all traffic is routed through the Tor network.

It is our hope that this work can be replicated and eventually fully automated, given a good UI, and rolled into a single ROM or ROM addon package for ease of use. Ultimately, there is no reason why this system could not become a full-fledged, off-the-shelf product, given proper hardware support and a good UI for the more technical bits.

The remainder of this document is divided into the following sections:

  1. Hardware Selection
  2. Installation and Setup
  3. Google Apps Setup
  4. Recommended Software
  5. Device Backup Procedure
  6. Removing the Built-in Microphone
  7. Removing Baseband Remnants
  8. Future Work
  9. Changes Since Initial Posting

Hardware Selection

If you truly wish to secure your mobile device from remote compromise, it is necessary to carefully select your hardware. First and foremost, it is absolutely essential that the carrier's baseband firmware is completely isolated from the rest of the platform. Because your cell phone baseband does not authenticate the network (in part to allow roaming), any random hacker with their own cell network can exploit vulnerabilities or backdoors in the baseband firmware and use them to install malware on your device.

While there are projects underway to determine which handsets actually provide true hardware baseband isolation, at the time of this writing there is very little public information available on this topic. Hence, the only safe option remains a device with no cell network support at all (though cell network connectivity can still be provided by a separate device). For the purposes of this post, the reference device is the WiFi-only version of the 2013 Google Nexus 7 tablet.

For users who wish to retain full mobile access, we recommend obtaining a cell modem device that provides a WiFi access point for data services only. These devices do not have microphones and in some cases do not even have fine-grained GPS units (because they are not able to make emergency calls). They are also available with prepaid plans, for rates around $20-30 USD per month, for about 2GB/month of 4G data. If coverage and reliability are important to you, though, you may want to go with a slightly more expensive carrier. In the US, T-Mobile isn't bad, but Verizon is superb.

To increase battery life of your cell connection, you can connect this access point to an external mobile USB battery pack, which typically will provide 36-48 hours of continuous use with a 6000mAh battery.

The total cost of a WiFi-only tablet with cell modem and battery pack is only roughly USD $50 more than the 4G LTE version of the same device.

In this way, you achieve true baseband isolation, with no risk of audio or network surveillance, baseband exploits, or provider backdoors. Effectively, this cell modem is just another untrusted router in a long, long chain of untrustworthy Internet infrastructure.

However, do note though that even if the cell unit does not contain a fine-grained GPS, you still sacrifice location privacy while using it. Over an extended period of time, it will be possible to make inferences about your physical activity, behavior and personal preferences, and your identity, based on cell tower use alone.

Installation and Setup

We will focus on the installation of Cyanogenmod 11 using Team Win Recovery Project, both to give this HOWTO some shelf life, and because Cyanogenmod 11 features full SELinux support (Dear NSA: What happened to you guys? You used to be cool. Well, some of you. Some of the time. Maybe. Or maybe not).

The use of Google Apps and Google Play services is not recommended due to security issues with Google Play. However, we do provide workarounds for mitigating those issues, if Google Play is required for your use case.

Installation and Setup: ROM and Core App Installation

With the 2013 Google Nexus 7 tablet, installation is fairly straight-forward. In fact, it is actually possible to install and use the device before associating it with a Google Account in any way. This is a desirable property, because by default, the otherwise mandatory initial setup process of the stock Google ROM sends your device MAC address directly to Google and links it to your Google account (all without using Tor, of course).

The official Cyanogenmod installation instructions are available online, but with a fresh out of the box device, here are the key steps for installation without activating the default ROM code at all (using Team Win Recovery Project instead of ClockWorkMod).

First, on your desktop/laptop computer (preferably Linux), perform the following:

  1. Download the latest CyanogenMod 11 release (we used cm-11-20140504-SNAPSHOT-M6)
  2. Download the latest Team Win Recovery Project image (we used
  3. Download the F-Droid package (we used 0.66)
  4. Download the Orbot package from F-Droid (we used 13.0.7)
  5. Download the Droidwall package from F-Droid (we used 1.5.7)
  6. Download the Droidwall Firewall Scripts attached to this blogpost
  7. Download the Google Apps for Cyanogenmod 11 (optional)

Because the download integrity for all of these packages is abysmal, here is a signed set of SHA256 hashes I've observed for those packages.

Once you have all of those packages, boot your tablet into fastboot mode by holding the Power button and the Volume Down button during a cold boot. Then, attach it to your desktop/laptop machine with a USB cable and run the following commands from a Linux/UNIX shell:

 apt-get install android-tools-adb android-tools-fastboot   # install adb and fastboot on the desktop
 fastboot devices                     # verify the tablet is visible in fastboot mode
 fastboot oem unlock                  # unlock the bootloader (this wipes the device)
 fastboot flash recovery openrecovery-twrp-   # flash the TWRP recovery image you downloaded

After the recovery firmware is flashed successfully, use the volume keys to select Recovery and hit the power button to reboot the device (or power it off, and then boot holding Power and Volume Up).

Once Team Win boots, go into Wipe and select Advanced Wipe. Select all checkboxes except for USB-OTG, and slide to wipe. Once the wipe is done, click Format Data. After the format completes, issue these commands from your Linux shell:

 adb start-server
 adb push cm-11-<version>.zip /sdcard/     # the CyanogenMod zip you downloaded
 adb push gapps-<version>.zip /sdcard/     # Optional: the Google Apps zip

After this push process completes, go to the Install menu, and select the Cyanogen zip, and optionally the gapps zip for installation. Then click Reboot, and select System.

After rebooting into your new installation, skip all CyanogenMod and Google setup, disable location reporting, and immediately disable WiFi and turn on Airplane mode.

Then, go into Settings -> About Tablet and scroll to the bottom and click the greyed out Build number 5 times until developer mode is enabled. Then go into Settings -> Developer Options and turn on USB Debugging.

After that, run the following commands from your Linux shell:

 adb install FDroid.apk
 adb install org.torproject.android_86.apk
 adb install com.googlecode.droidwall_157.apk

You will need to approve the ADB connection for the first package, and then they should install normally.

VERY IMPORTANT: Whenever you finish using adb, always remember to disable USB Debugging and restore Root Access to Apps only. While Android 4.2+ ROMs now prompt you to authorize an RSA key fingerprint before allowing a debugging connection (thus mitigating adb exploit tools that bypass screen lock and can install root apps), you still risk additional vulnerability surface by leaving debugging enabled.

Installation and Setup: Initial Configuration

After the base packages are installed, go into the Settings app, and make the following changes:

  1. Wireless & Networks More... =>
    • Temporarily Disable Airplane Mode
    • NFC -> Disable
    • Re-enable Airplane Mode
  2. Location Access -> Off
  3. Security =>
    • PIN screen Lock
    • Allow Unknown Sources (For F-Droid)
  4. Language & Input =>
    • Spell Checker -> Android Spell Checker -> Disable Contact Names
    • Disable Google Voice Typing/Hotword detection
    • Android Keyboard (AOSP) =>
      • Disable AOSP next-word suggestion (do this first!)
      • Auto-correction -> Off
  5. Backup & reset =>
    • Enable Back up my data (just temporarily, for the next step)
    • Uncheck Automatic restore
    • Disable Backup my data
  6. About Tablet -> Cyanogenmod Statistics -> Disable reporting
  7. Developer Options -> Device Hostname -> localhost
  8. SuperUser -> Settings (three dots) -> Notifications -> Notification (not toast)
  9. Privacy -> Privacy Guard =>
    • Enabled by default
    • Settings (three dots) -> Show Built In Apps
    • Enable Privacy Guard for every app with the following exceptions:
      • Calendar
      • Config Updater
      • Google Account Manager (long press)
        • Modify Settings -> Off
        • Wifi Change -> Off
        • Data Change -> Off
      • Google Play Services (long press)
        • Location -> Off
        • Modify Settings -> Off
        • Draw on top -> Off
        • Record Audio -> Off
        • Wifi Change -> Off
      • Google Play Store (long press)
        • Location -> Off
        • Send SMS -> Off
        • Modify Settings -> Off
        • Data change -> Off
      • Google Services Framework (long press)
        • Modify Settings -> Off
        • Wifi Change -> Off
        • Data Change -> Off
      • Trebuchet

Now, it is time to encrypt your tablet. It is important to do this step early, as I have noticed additional apps and configuration tweaks can make this process fail later on.

We will also do this from the shell, in order to set a different password than your screen unlock pin. This is done to mitigate the risk of compromise of this password from shoulder surfers, and to allow the use of a much longer (and non-numeric) password that you would prefer not to type every time you unlock the screen.

To do this, open the Terminal app, and type the following command:

vdc cryptfs enablecrypto inplace NewMoreSecurePassword

Watch for typos! That command does not ask you to re-type that password for confirmation.

Installation and Setup: Disabling Invasive Apps and Services

Before you configure the Firewall or enable the network, you likely want to disable at least a subset of the following built-in apps and services, by using Settings -> Apps -> All, and then clicking on each app and hitting the Disable button:

  • Face Unlock
  • Google Backup Transport
  • Google Calendar Sync
  • Google One Time Init
  • Google Partner Setup
  • Google Contacts Sync
  • Google Search
  • Hangouts
  • Market Feedback Agent
  • News & Weather
  • One Time Init
  • Picasa Updater
  • Sound Search for Google Play
  • TalkBack

Installation and Setup: Tor and Firewall Configuration

Ok, now let's install the firewall and Tor support scripts. Go back into Settings -> Developer Options, enable USB Debugging, and change Root Access to Apps and ADB. Then, unzip the firewall scripts package on your laptop, and run the installation script it contains:


That firewall installation provides several key scripts that provide functionality that is currently impossible to achieve with any app (including Orbot):

  1. It installs a userinit script to block all network access during boot.
  2. It disables "Google Captive Portal Detection", which involves connection attempts to Google servers upon WiFi association (these requests are made by the Android Settings UID, which should normally be blocked from the network, unless you are first registering for Google Play).
  3. It contains a Droidwall script that configures Tor transproxy rules to send all of your traffic through Tor (a minimal sketch of what such rules look like appears after this list). These rules include a fix for a Linux kernel Tor transproxy packet leak issue.
  4. The main Droidwall script also includes an input firewall, to block all inbound connections to the device. It also fixes a Droidwall permissions vulnerability.
  5. It installs an optional script to allow the Browser app to bypass Tor for logging into WiFi captive portals.
  6. It installs an optional script to temporarily allow network adb access when you need it (if you are paranoid about USB exploits, which you should be).
  7. It provides an optional script to allow the UDP activity of LinPhone to bypass Tor, to allow ZRTP-encrypted Voice and Video SIP/VoIP calls. SIP account login/registration and call setup/signaling can be done over TCP, and Linphone's TCP activity is still sent through Tor with this script.
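
For the curious, here is a minimal sketch of what transproxy rules like those in item 3 look like. The UID and ports are illustrative assumptions (9040 is Orbot's conventional TransPort), not values taken from the actual scripts, which do considerably more:

 TOR_UID=1001   # assumed UID of the Tor process
 # Let Tor's own traffic leave directly; otherwise nothing could reach the network.
 iptables -t nat -A OUTPUT -m owner --uid-owner $TOR_UID -j RETURN
 # Redirect DNS lookups to Tor's DNSPort (assumed here to be 5400).
 iptables -t nat -A OUTPUT -p udp --dport 53 -j REDIRECT --to-ports 5400
 # Redirect all new TCP connections into Tor's transparent proxy port.
 iptables -t nat -A OUTPUT -p tcp --syn -j REDIRECT --to-ports 9040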

Note that with the exception of the userinit network blocking script, installing these scripts does not activate them. You still need to configure Droidwall to use them.

We use Droidwall instead of Orbot or AFWall+ for five reasons:

  1. Droidwall's app-based firewall and Orbot's transproxy are known to conflict and reset one another.
  2. Droidwall does not randomly drop transproxy rules when switching networks (Orbot has had several of these types of bugs).
  3. Unlike AFWall+, Droidwall is able to auto-launch at "boot" (though still not before the network and Android Services come online and make connections).
  4. AFWall+'s "fix" for this startup data leak problem does not work on Cyanogenmod (hence our userinit script instead).
  5. Aside from the permissions issue fixed by our script, AFWall+ provides no additional security fixes over the stock Droidwall.

To make use of the firewall scripts, open up Droidwall and hit the config button (the vertical three dots), go to More -> Set Custom Script. Enter the following:

. /data/local/
#. /data/local/
#. /data/local/
#. /data/local/

NOTE: You must not make any typos in the above. If you mistype any of those files, things may break. Because the script blocks all network at boot, if you make a typo in the torify script, you will be unable to use the Internet at all!

Also notice that these scripts have been installed into a read-only root directory. Because they are run as root, installing them to a world-writable location like /sdcard/ would be extremely unwise.

Later, if you want to enable one of network adb, LinPhone UDP, or captive portal login, go back into this window and remove the leading comment ('#') from the appropriate lines (this is obviously one of the many aspects of this prototype that could benefit from real UI).

Then, configure the apps you want to allow to access the network. Note that the only Android system apps that must access the network are:

  • CM Updater
  • Downloads, Media Storage, Download Manager
  • F-Droid

Orbot's network access is handled via the main script. You do not need to enable full network access to Orbot in Droidwall.

The rest of the apps you can enable at your discretion. They will all be routed through Tor automatically.

Once Droidwall is configured, you can click on the Menu (three dots) and click the "Firewall Disabled" button to enable the firewall. Then, you can enable Orbot. Do not grant Orbot superuser access. It still opens the transproxy ports you need without root, and Droidwall is managing installation of the transproxy rules, not Orbot.

You are now ready to enable WiFi and network access on your device. For vulnerability surface reduction, you may want to use Advanced Options -> Static IP to manually enter an IP address for your device, to avoid using dhclient. You do not need a real DNS server, and can safely set it to a placeholder such as 127.0.0.1.

Google Apps Setup

If you installed the Google Apps zip, you need to do a few things now to set it up, and to further harden your device. If you opted out of Google Apps, you can skip to the next section.

Google Apps Setup: Initializing Google Play

The first time you use Google Play, you will need to enable four apps in Droidwall: "Google Account Manager, Google Play Services...", "Settings, Dev Tools, Fused Location...", "Gmail", and "Google Play" itself.

If you do not have a Google account, your best bet is to find open wifi to create one, as Google will often block accounts created through Tor, even if you use an Android device.

After you log in for the first time, you should be able to disable the "Google Account Manager, Google Play Services...", "Gmail", and the "Settings..." apps in Droidwall, but your authentication tokens in Google Play may expire periodically. If this happens, you should only need to temporarily enable the "Google Account Manager, Google Play Services..." app in Droidwall to obtain new ones.

Google Apps Setup: Mitigating the Google Play Backdoors

If you do choose to use Google Play, you need to be very careful about how you allow it to access the network. In addition to the risks associated with using a proprietary app store that can send you targeted malware-infected packages based on your Google account, it has at least two major security flaws:

  1. Anyone who is able to gain access to your Google account can silently install root or full-permission apps without any user interaction whatsoever. Once installed, these apps can retroactively clear what little installation notification and UI-based evidence of their existence there was in the first place.
  2. The Android Update Process does not inform the user of changes in permissions of pending update apps that happen to get installed after an Android upgrade.

The first issue can be mitigated by ensuring that Google Play does not have access to the network when not in use, by disabling it in Droidwall. If you do not do this, apps can be installed silently behind your back. Welcome to the Google Experience.

For the second issue, you can install the SecCheck utility, to monitor your apps for changes in permissions during a device upgrade.

Google Apps Setup: Disabling Google Cloud Messaging

If you have installed the Google Apps zip, you have also enabled a feature called Google Cloud Messaging.

The Google Cloud Messaging Service allows apps to register for asynchronous remote push notifications from Google, as well as send outbound messages through Google.

Notification registration and outbound messages are sent via the app's own UID, so using Droidwall to disable network access by an app is enough to prevent outbound data, and notification registration. However, if you ever allow network access to an app, and it does successfully register for notifications, these notifications can be delivered even when the app is once again blocked from accessing the network by Droidwall.

These inbound notifications can be blocked by disabling network access to the "Google Account Manager, Google Play Services, Google Services Framework, Google Contacts Sync" in Droidwall. In fact, the only reason you should ever need to enable network access by this service is if you need to log in to Google Play again if your authentication tokens ever expire.

If you would like to test your ability to control Google Cloud Messaging, there are two apps in the Google Play store that can help with this. GCM Test allows for simple send and receive pings through GCM. Push Notification Tester will allow you to test registration and asynchronous GCM notification.

Recommended Privacy and Auditing Software

Ok, now that we have locked down our Android device, it's time for the fun bit: secure communications!

We recommend the following apps from F-Droid:

  1. Xabber

    Xabber is a full Java implementation of XMPP, and supports both OTR and Tor. Its UI is a bit more streamlined than Guardian Project's ChatSecure, and it does not make use of any native code components (which are more vulnerable to code execution exploits than pure Java code). Unfortunately, this means it lacks some of ChatSecure's nicer features, such as push-to-talk voice and file transfer.

    Despite better protection against code execution, it does have several insecure default settings. In particular, you want to make the following changes:

    • Notifications -> Message text in Notifications -> Off (notifications can be read by other apps!)
    • Accounts -> Integration into system accounts -> Off
    • Accounts -> Store message history -> Don't Store
    • Security -> Store History -> Off
    • Security -> Check Server Certificate
    • Chat -> Show Typing Notifications -> Off
    • Connection Settings -> Auto-away -> disabled
    • Connection Settings -> Extended away when idle -> Disabled
    • Keep Wifi Awake -> On
    • Prevent sleep Mode -> On
  2. Offline Calendar

    Offline Calendar is a hack to allow you to create a fake local Google account that does not sync to Google. This allows you to use the Calendar App without risk of leaking your activities to Google. Note that you must exempt both this app and Calendar from Privacy Guard for it to function properly.

  3. LinPhone

    LinPhone is a FOSS SIP client that supports TCP TLS signaling and ZRTP. Note that neither TLS nor ZRTP are enabled by default. You must manually enable them in Settings -> Network -> Transport and Settings -> Network -> Media Encryption. Ostel, a free SIP service run by the Guardian Project, supports only TLS and ZRTP, but does not allow outdialing to normal PSTN telephone numbers. While Bitcoin has many privacy issues of its own, the Bitcoin community maintains a couple of lists of "trunking" providers that allow you to obtain a PSTN phone number in exchange for Bitcoin payment.

  4. Plumble

    Plumble is a Mumble client that will route voice traffic over Tor, which is useful if you would like to communicate with someone over voice without revealing your IP to them, or your activity to a local network observer. However, unlike Linphone, voice traffic is not end-to-end encrypted, so the Mumble server can listen to your conversations.

  5. K-9 Mail and APG

    K-9 Mail is a POP/IMAP client that supports TLS and integrates well with APG, which will allow you to send and receive GPG-encrypted mail easily. Before using it, you should be aware of two things: It identifies itself in your mail headers, which opens you up to targeted attacks specifically tailored for K-9 Mail and/or Android, and by default it includes the subject of messages in mail notifications (which is bad, because other apps can read notifications). There is a privacy option to disable subject text in notifications, but there is no option to disable the user agent in the mail headers.

  6. OSMAnd~

    A free offline mapping tool. While the UI is a little clunky, it does support voice navigation and driving directions, and is a handy, private alternative to Google Maps.

  7. VLC

    The VLC port in F-Droid is a fully capable media player. It can play mp3s and most video formats in use today. It is a handy, private alternative to Google Music and other closed-source players that often report your activity to third party advertisers. VLC does not need network access to function.

  8. Firefox

    We do not yet have a port of Tor Browser for Android (though one is underway -- see the Future Work section). Unless you want to use Google Play to get Chrome, Firefox is your best bet for a web browser that receives regular updates (the built-in Browser app does not). HTTPS-Everywhere and NoScript are available, at least.

  9. Bitcoin

    Bitcoin might not be the most private currency in the world. In fact, you might even say it's the least private currency in the world. But, it is a neat toy.

  10. Launch App Ops

    The Launch App Ops app is a simple shortcut into the hidden application permissions editor in Android. A similar interface is available through Settings -> Privacy -> Privacy Guard, but a direct shortcut to edit permissions is handy. It also displays some additional system apps that Privacy Guard omits.

  11. Permissions

    The Permissions app gives you a view of all Android permissions, and shows you which apps have requested a given permission. This is particularly useful to disable the record audio permission for apps that you don't want to suddenly decide to listen to you. (Interestingly, the Record Audio permission disable feature was broken in all Android ROMs I tested, aside from Cyanogenmod 11. You can test this yourself by revoking the permission from the Sound Recorder app, and verifying that it cannot record.)

  12. CatLog

    In addition to being supercute, CatLog is an excellent Android monitoring and debugging tool. It allows you to monitor and record the full set of Android log events, which can be helpful in diagnosing issues with apps.

  13. OS Monitor

    OS Monitor is an excellent Android process and connection monitoring app that can help you watch for CPU usage and connection attempts by your apps.

  14. Intent Intercept

    Intent Intercept allows you to inspect and extract Android Intent content without allowing it to get forwarded to an actual app. This is useful for monitoring how apps attempt to communicate with each other, though be aware it only covers one of the mechanisms of inter-app communication in Android.

Backing up Your Device Without Google

Now that your device is fully configured and installed, you probably want to know how to back it up without sending all of your private information directly to Google. While the Team Win Recovery Project will back up all of your system settings and apps (even if your device is encrypted), it currently does not back up the contents of your virtualized /sdcard. Remembering to do a couple adb pulls of key directories can save you a lot of heartache should you suffer some kind of data loss or hardware failure (or simply drop your tablet on a bridge while in a rush to catch a train).

The backup script uses adb to pull your Download and Pictures directories from the /sdcard, as well as the entire TWRP backup directory.
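
If you prefer to do it by hand, here is a sketch of the equivalent manual pulls (the destination directory is your choice):

 adb pull /sdcard/Download ./tablet-backup/Download
 adb pull /sdcard/Pictures ./tablet-backup/Pictures
 adb pull /sdcard/TWRP ./tablet-backup/TWRP   # the TWRP backups themselves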

Before you use that script, you probably want to delete old TWRP backup folders so as to only pull one backup, to reduce pull time. These live in /sdcard/TWRP/BACKUPS/, which is also known as /storage/emulated/0/TWRP/BACKUPS in the File Manager app.
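
For example, from an adb shell (the backup folder name is a placeholder; list the directory first to see yours):

 ls /sdcard/TWRP/BACKUPS/
 rm -r /sdcard/TWRP/BACKUPS/<old-backup-name>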

To use this script over the network without a USB cable, enable both USB Debugging and ADB Over Network in your developer settings. The script does not require you to enable root access from adb, and you should not enable it: a backup takes quite a while to run, especially over network adb, and that is a long time to leave a root-capable debugging interface exposed.

Prior to using network adb, you must edit your Droidwall custom scripts to allow it (by removing the '#' in the #. /data/local/ line you entered earlier), and then run the following commands from a non-root Linux shell on your desktop/laptop (the ADB Over Network setting will tell you the IP and port):

killall adb                  # stop any local adb server holding the USB transport
adb connect ip:5555          # use the IP and port shown in the ADB Over Network setting

VERY IMPORTANT: Don't forget to disable USB Debugging, as well as the Droidwall adb exemption when you are done with the backup!

Removing the Built-in Microphone

If you would really like to ensure that your device cannot listen to you even if it is exploited, it turns out it is very straight-forward to remove the built-in microphone in the Nexus 7. There is only one mic on the 2013 model, and it is located just below the volume buttons (the tiny hole).

To remove it, all you need to do is pop off the back panel (this can be done with your fingernails, or a tiny screwdriver), and then you can shave the microphone right off that circuit board, and reattach the panel. I have done this to one of my devices, and it was subsequently unable to record audio at all, without otherwise affecting functionality.

You can still use apps that require a microphone by plugging in a headphone headset with a built-in mic (these cost around $20 and you can get them from nearly any consumer electronics store). I have also tested this, and was still able to make a Linphone call from a device with the built-in microphone removed, but with an external headset. Note that the 2012 Nexus 7 does not support these combination microphone+headphone jacks (and it has a secondary microphone as well). You must have the 2013 model.

The 2013 Nexus 7 Teardown video can give you an idea of what this looks like before you try it. Again you do not need to fully disassemble the device - you only need to remove the back cover.

Pro-Tip: Before you go too crazy and start ripping out the cameras too, remember that you can cover the cameras with a sticker or tape when not in use. I have found that regular old black electrical tape applies seamlessly, is non-obvious to casual onlookers, and is easy to remove without smudging or gunking up the lenses. Better still, it can be removed and reapplied many times without losing its adhesive.

Removing the Remnants of the Baseband

There is one more semi-hardware mod you may want to make, though.

It turns out that the 2013 Wifi Nexus 7 does actually have a partition that contains a cell network baseband firmware on it, located on the filesystem as the block device /dev/block/platform/msm_sdcc.1/by-name/radio. If you run strings on that block device from the shell, you can see that all manner of CDMA and GSM log messages, comments, and symbols are present in that partition.
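
For example, from a root shell on the device (assuming your busybox build provides strings):

 strings /dev/block/platform/msm_sdcc.1/by-name/radio | head -50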

According to ADB logs, Cyanogenmod 11 actually does try to bring up a cell network radio at boot on my WiFi-only Nexus 7, but fails due to it being disabled. There is also a strong economic incentive for Asus and Google to make it extremely difficult to activate the baseband, even if the hardware is otherwise identical for manufacturing reasons: they sell the WiFi-only version for $100 less. If it were easy to re-enable the baseband, HOWTOs would exist (which they do not seem to, at least not yet), and they would cut into LTE device sales.

Even so, since we lack public schematics for the Nexus 7 to verify that cell components are actually missing or hardware-disabled, it may be wise to wipe this radio firmware as well, as defense in depth.

To do this, open the Terminal app, and run:

cd /dev/block/platform/msm_sdcc.1/by-name
dd if=/dev/zero of=./radio     # overwrite the radio (baseband) firmware partition with zeros

I have wiped that partition while the device was running without any issue, or any additional errors from ADB logs.

Note that an anonymous commenter also suggested it is possible to disable the baseband of a cell-enabled device using a series of Android service disable commands, and by wiping that radio block device. I have not tested this on a device other than the WiFi-only Nexus 7, though, so proceed with caution. If you try those steps on a cell-enabled device, you should archive a copy of your radio firmware first by doing something like the following from the dev directory that contains the radio firmware block device:

dd if=./radio of=/sdcard/radio.img

If anything goes wrong, you can restore that image with:

dd if=/sdcard/radio.img of=./radio

Future Work

In addition to streamlining the contents of this post into a single additional Cyanogenmod installation zip or alternative ROM, the following problems remain unsolved.

Future Work: Better Usability

While arguably very secure, this system is obviously nowhere near usable. Here are some potential improvements to the user interface, based on a brainstorming session I had with another interested developer.

First of all, the AFWall+/Droidwall UI should be changed to be a tri-state: It should allow you to send app traffic over Tor, over your normal internet connection, or block it entirely.

Next, during app installation from either F-Droid or Google Play (this is an Intent another addon app can actually listen for), the user should be given the chance to decide if they would like that app's traffic to be routed over Tor, use the normal Internet connection, or be blocked entirely from accessing the network. Currently, the Droidwall default for new apps is "no network", which is a great default, but it would be nice to ask users what they would like to do during actual app installation.

Moreover, users should also be given a chance to edit the app's permissions upon installation as well, should they desire to do so.

The Google Play situation could also be vastly improved, should Google itself still prove unwilling to improve the situation. Google Play could be wrapped in a launcher app that automatically grants it network access prior to launch, and then disables it upon leaving the window.

A similar UI could be added to LinPhone. Because the actual voice and video transport for LinPhone does not use Tor, it is possible for an adversary to learn your SIP ID or phone number, and then call you just for the purposes of learning your IP. Because we handle call setup over Tor, we can prevent LinPhone from performing any UDP activity, or divulging your IP to the calling party prior to user approval of the call. Ideally, we would also want to inform the user of the fact that incoming calls can be used to obtain information about them, at least prior to accepting their first call from an unknown party.

Future Work: Find Hardware with Actual Isolated Basebands

Related to usability, it would be nice if we could have a serious community effort to audit the baseband isolation properties of existing cell phones, so we all don't have to carry around these ridiculous battery packs and sketch-ass wifi bridges. There is no engineering reason why this prototype could not be just as secure if it were a single piece of hardware. We just need to find the right hardware.

A random commenter claimed that the Galaxy Nexus might actually have exactly the type of baseband isolation we want, but the comment was from memory, and based on software reverse engineering efforts that were not publicly documented. We need to do better than this.

Future Work: Bug Bounty Program

If there is sufficient interest in this prototype, and/or if it gets transformed into a usable addon package or ROM, we may consider running a bug bounty program: we would accept donations to a dedicated Bitcoin address, and award the contents of that wallet to anyone who discovers a Tor proxy bypass issue or a remote code execution vulnerability in any of the network-enabled apps mentioned in this post (except for the Browser app, which does not receive security updates).

Future Work: Port Tor Browser to Android

The Guardian Project is undertaking a port of Tor Browser to Android as part of their OrFox project. This will greatly improve the privacy of your web browsing experience on the Android device over both Firefox and Chrome. We look forward to helping them in any way we can with this effort.

Future Work: WiFi MAC Address Randomization

It is actually possible to randomize the WiFi MAC address on the Google Nexus 7. The closed-source root app Mac Spoofer is able to modify the device MAC address using Qualcomm-specific methods in such a way that the entire Android OS becomes convinced that this is your actual MAC.

However, doing this requires installation of a root-enabled, closed-source application from the Google Play Store, which we believe is extremely unwise on a device you need to be able to trust. Moreover, this app cannot be autorun on boot, and your MAC address will also reset every time you disable the WiFi interface (which is easy to do accidentally). It also supports using only a single, manually entered MAC address.

Hardware-independent techniques (such as the Terminal command busybox ifconfig wlan0 hw ether <mac>) appear to interfere with the WiFi management system and prevent it from associating. Moreover, they do not cause the Android system to report the new MAC address (visible under Settings -> About Tablet -> Status).

Obviously, an Open Source F-Droid app that properly resets (and automatically randomizes) the MAC every time the WiFi interface is brought up is badly needed.
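
The core of such an app would look roughly like the following. Note that, as described above, this hardware-independent approach currently breaks WiFi association on this device, so treat it strictly as a sketch of the desired behavior rather than working code:

# Generate a random locally-administered MAC; the 02: prefix sets
# the locally-administered bit and clears the multicast bit.
MAC=$(printf '02:%02x:%02x:%02x:%02x:%02x' \
    $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)) \
    $((RANDOM % 256)) $((RANDOM % 256)))

busybox ifconfig wlan0 down
busybox ifconfig wlan0 hw ether "$MAC"
busybox ifconfig wlan0 up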

Future Work: Disable Probes for Configured Wifi Networks

The Android OS currently probes for all of your configured WiFi networks while looking for open wifi to connect to. Configured networks should not be probed for explicitly unless activity for their BSSID is seen. The xda-developers forum has a limited fix to change scanning behavior, but users report that it does not disable the active probing behavior for any "hidden" networks that you have configured.
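
For reference, the knob that controls directed probing lives in wpa_supplicant's configuration: scan_ssid=1 is what makes the supplicant probe for a hidden network by name. A sketch (the path and SSID are examples and vary by device):

# /data/misc/wifi/wpa_supplicant.conf (path varies by ROM)
network={
    ssid="example-net"   # hypothetical network name
    scan_ssid=0          # don't send probes naming this SSID (default)
    psk="..."            # placeholder
}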

Future Work: Recovery ROM Password Protection

An unlocked recovery ROM is a huge vulnerability surface for Android. While disk encryption protects your applications and data, it does not protect many key system binaries and boot programs. With physical access, it is possible to modify these binaries through your recovery ROM.

The ability to set a password for the Team Win recovery ROM, in such a way that a simple "fastboot flash recovery" would overwrite it, would go a long way toward improving device security. At the very least, the absence of the password would make it evident to you that your recovery ROM had been replaced.

It may also be possible to restore your bootloader lock as an alternative, but then you lose the ability to make backups of your system using Team Win.

Future Work: Disk Encryption via TPM or Clever Hacks

Unfortunately, even disk encryption and a secure recovery firmware are not enough to fully defend against an adversary with an extended period of physical access to your device.

Cold Boot Attacks are still very much a reality against any form of disk encryption, and the best way to eliminate them is through hardware-assisted secure key storage, such as through a TPM chip on the device itself.

It may also be possible to mitigate these attacks by placing key material in SRAM memory locations that will be overwritten as part of the ARM boot process. If these physical memory locations are stable (and for ARM systems that use the SoC SRAM to boot, they will be), rebooting the device to extract key material will always end up overwriting it. Similar ARM CPU-based encryption defenses have also been explored in the research literature.

Future Work: Download and Build Process Integrity

Beyond the download integrity issues mentioned above, better build security is also deeply needed by all of these projects. A Gitian descriptor that is capable of building Cyanogenmod and arbitrary F-Droid packages in a reproducible fashion is one way to go about achieving this property.

Future Work: Removing Binary Blobs

If you read the Cyanogenmod build instructions closely, you can see that it requires extracting the binary blobs from some random phone, and shipping them out. This is the case with most ROMs. In fact, only the Replicant Project seems concerned with this practice, but regrettably they do not support any wifi-only devices. This is rather unfortunate, because no matter what they do with the Android OS on existing cell-enabled devices, they will always be stuck with a closed source, backdoored baseband that has direct access to the microphone, if not the RAM and the entire Android OS.

Kudos to them for finding one of the backdoors though, at least.

Changes Since Initial Posting

  1. Updated firewall scripts to fix Droidwall permissions vulnerability.
  2. Updated Applications List to recommend VLC as a free media player.
  3. Mention the Guardian Project's planned Tor Browser port (called OrFox) as Future Work.
  4. Mention disabling configured WiFi network auto-probing as Future Work
  5. Updated the firewall install script (and the zip that contains it) to disable "Captive Portal detection" connections to Google upon WiFi association. These connections are made by the Settings service user, which should normally be blocked unless you are activating Google Play for the first time.
  6. Updated the Executive Summary section to make it clear that our SIP client can actually also make normal phone calls, too.
  7. Document removing the built-in microphone, for the truly paranoid folk out there.
  8. Document removing the remnants of the baseband, or disabling an existing baseband.
  9. Update SHA256SUM of FDroid.apk for 0.63
  10. Remove multiport usage from the firewall script (and update the zip that contains it).
  11. Add pro-tip to the microphone removal section: Don't remove your cameras. Black electrical tape works just fine, and can be removed and reapplied many times without smudges.
  12. Update installation and documentation to use /data/local instead of /etc. CM updates will wipe /etc, of course. Whoops. If this happened to you while updating to CM-11-M5, download that new zip and run it again as per the instructions above, and update your Droidwall custom script locations to use /data/local.
  13. Update the Future work section to describe some specific UI improvements.
  14. Update the Future work section to mention that we need to find hardware with actual isolated basebands. Duh. This should have been in there much earlier.
  15. Update the versions for everything
  16. Suggest enabling disk crypto directly from the shell, to avoid SSD leaks of the originally PIN-encrypted device key material.
  17. GMail network access seems to be required for App Store initialization now. Mention this in Google Apps section.
  18. Mention K-9 Mail, APG, and Plumble in the Recommended Apps section.
  19. Update the Firewall instructions to clarify that you need to ensure there are no typos in the scripts, and actually click the Droidwall UI button to enable the Droidwall firewall (otherwise networking will not work at all due to the default-deny policy).
  20. Disable NFC in Settings config

Tor Browser Bundle 3.0beta1 Released

The first beta release in the 3.0 series of the Tor Browser Bundle is now available from the Tor Package Archive:

This release includes important security updates to Firefox, as well as a fix for a startup crash bug on Windows XP.

This release also reorganizes the bundle directory structure to simplify implementation of the Firefox updater in future releases. This means that extracting the bundle over a previous installation will likely not preserve your preferences or bookmarks, and may cause other issues.

This release has also introduced a build reproducibility issue on Windows, hence it is signed only by two keys. We should have this issue fixed by the next beta.

Here is the complete ChangeLog:

  • All Platforms:
    • Update Firefox to 17.0.10esr
    • Update NoScript to
    • Update HTTPS-Everywhere to 3.4.2
    • Bug #9114: Reorganize the bundle directory structure to ease future autoupdates
    • Bug #9173: Patch Tor Browser to auto-detect profile directory if launched without the wrapper script.
    • Bug #9012: Hide Tor Browser infobar for missing plugins.
    • Bug #8364: Change the default entry page for the addons tab to the installed addons page.
    • Bug #9867: Make flash objects really be click-to-play if flash is enabled.
    • Bug #8292: Make getFirstPartyURI log+handle errors internally to simplify caller usage of the API
    • Bug #3661: Remove polipo and privoxy from the banned ports list.
    • misc: Fix a potential memory leak in the Image Cache isolation
    • misc: Fix a potential crash if OS theme information is ever absent
    • Update Tor-Launcher to
      • Bug #9114: Handle new directory structure
      • misc: Tor Launcher now supports Thunderbird
    • Update Torbutton to 1.6.4
      • Bug #9224: Support multiple Tor socks ports for about:tor status check
      • Bug #9587: Add TBB version number to about:tor
      • Bug #9144: Workaround to handle missing translation properties
  • Windows:
    • Bug #9084: Fix startup crash on Windows XP.
  • Linux:
    • Bug #9487: Create detached debuginfo files for Linux Tor and Tor Browser binaries.

Deterministic Builds Part Two: Technical Details

This is the second post in a two-part series on the build security improvements in the Tor Browser Bundle 3.0 release cycle.

The first post described why such security is necessary. This post is meant to describe the technical details with respect to how such builds are produced.

We achieve our build security through a reproducible build process that enables anyone to produce byte-for-byte identical binaries to the ones we release. Elsewhere on the Internet, this process is variously called "deterministic builds", "reproducible builds", "idempotent builds", and probably a few other terms, too.

To produce byte-for-byte identical packages, we use Gitian to build Tor Browser Bundle 3.0 and above, but that isn't the only option for achieving reproducible builds. We will first describe how we use Gitian, and then go on to enumerate the individual issues that Gitian solves for us, and those we had to solve ourselves through wrapper scripts, hacks, build process patches, and (in one esoteric case for Windows) direct binary patching.

Gitian: What is it?

Gitian, written in a combination of Ruby and bash, is a thin wrapper around the Ubuntu virtualization tools. It was originally developed by Bitcoin developers to ensure the build security and integrity of the Bitcoin software.

Gitian uses Ubuntu's python-vmbuilder to create a qcow2 base image for a given Ubuntu version and architecture combination, takes a set of git and tarball inputs that you specify in a 'descriptor', and then runs a shell script that you provide to build a component inside that controlled environment. This build process produces an output set that includes the compiled result and another "output descriptor" that captures the versions and hashes of all packages present on the machine during compilation.

Gitian requires either Intel VT support (for qemu-kvm), or LXC support, and currently only supports launching Ubuntu build environments from Ubuntu itself.
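
As a simplified example of the workflow (the descriptor path and tag name below are made up for illustration), driving gitian-builder by hand looks roughly like this:

git clone https://github.com/devrandom/gitian-builder.git
cd gitian-builder

# One-time step: build the Ubuntu base VM image (needs KVM or LXC).
bin/make-base-vm --suite precise --arch amd64

# Build one component inside the controlled VM, pinning the input
# repository to an explicit tag/commit; results land in result/.
bin/gbuild --commit tor=tbb-3.0beta1-build1 descriptors/linux/gitian-tor.yml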

Gitian: How Tor Uses It

Tor's use of Gitian is slightly more automated than Bitcoin's, and slightly different in other respects as well.

First of all, because Gitian supports only Ubuntu hosts and targets, we must cross compile everything for Windows and MacOS. Luckily, Mozilla provides support for MinGW-w64 as a "third tier" compiler, and does endeavor to work with the MinGW team to fix issues as they arise.

To our further good fortune, we were able to use a MacOS cross compiler created by Ray Donnelly based on a fork of "Toolchain4". We owe Ray a great deal for providing his compilers to the public, and he has been most excellent in terms of helping us through any issues we encountered with them. Ray is also working to merge his patches into the crosstools-ng project, to provide a more seamless build process to create rebuilds of his compilers. As of this writing, we are still using his binaries in combination with the flosoft MacOS10.X SDK.

For each platform, we build the components of Tor Browser Bundle in 3 stages, with one descriptor per stage. The first descriptor builds Tor and its core dependency libraries (OpenSSL, libevent, and zlib) and produces one output zip file. The second descriptor builds Firefox. The third descriptor combines the previous two outputs along with our included Firefox addons and localization files to produce the actual localized bundle files.

We provide a Makefile and shellscript-based wrapper around Gitian to automate the download and authentication of our source inputs prior to build, and to perform a final step that creates a sha256sums.txt file listing all of the bundle hashes. That file can then be signed by any number of detached signatures, one for each builder.

It is important to distribute multiple cryptographic signatures to guard against the theft of any single build signing key (the signing key itself is of course another single point of failure). Unfortunately, GPG currently lacks support for verifying multiple signatures of the same document; users must manually copy each detached signature they wish to verify into its proper .asc filename suffix. Eventually, we hope to create a stub installer or wrapper script of some kind to simplify this step, as well as add multi-signature support to the Firefox update process. We are also investigating adding a URL and a hash of the package list to the Tor Consensus, so that the Tor Consensus document itself authenticates our binary packages.
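
In the meantime, verification by hand looks something like this (the signature file names are illustrative):

# Check the bundle you downloaded against the published hash list
# (errors for bundles you did not download can be ignored)...
sha256sum -c sha256sums.txt

# ...then verify as many builders' detached signatures over that
# list as you care to; GPG checks them one at a time.
gpg --verify sha256sums.txt.builder1.asc sha256sums.txt
gpg --verify sha256sums.txt.builder2.asc sha256sums.txt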

We do not use the Gitian output descriptors or the Gitian signing tools, because those tools exist to sign Gitian's output descriptors, and we found that Ubuntu's individual packages (which are listed in the output descriptors) varied too frequently to allow this mechanism to remain reproducible for very long. However, we do include a list of input versions and hashes used to produce each bundle in the bundle itself. The format of this versions file is the same one we use as input to download the sources. This means it should remain possible to re-build an arbitrary bundle for verification at a later date, assuming that any later updates to Ubuntu's toolchain packages do not change the output.

Gitian: Pain Points

Gitian is not perfect. In fact, many who have tried our build system have remarked that it is not even close to deterministic (and that for this and other reasons, 'Reproducible Builds' is a better term). It seems to experience build failures for quite unpredictable reasons related to bugs in one or more of qemu-kvm/LXC, make, and qcow copy-on-write image support. These bugs are often intermittent, and simply restarting the build process often causes things to proceed smoothly, which has made them exceedingly tricky to pinpoint and diagnose.

Gitian's use of tags (especially signed tags) has some bugs and flaws. For this reason, we verify signatures ourselves after input fetching, and provide Gitian only with explicit commit hashes for the input source repositories.

We maintain a list of the most common issues in the build instructions.

Remaining Build Reproducibility Issues

By default, the Gitian VM environment controls the following aspects of the build platform that normally vary and often leak into compiled software: hostname, build path, uname output, toolchain version, and time.

However, Gitian is not enough by itself to magically produce reproducible builds. Beyond what Gitian provides, we had to patch a number of reproducibility issues in Firefox and some of the supporting tools on each platform. These include:

  1. Reordering due to inode ordering differences (exposed via Python's os.walk())

    Several places in the Firefox build process use python scripts to repackage both compiled library archives and zip files. In particular, they tend to obtain directory listings using os.walk(), which is dependent upon the inode ordering of the filesystem. We fix this by sorting those file lists in the applicable places.

  2. LC_ALL localizations alter sorting order

    Sorting only gets you so far, though, if someone from a different locale is trying to reproduce your build. Differences in your character sets will cause these sort orders to differ. For this reason, we set the LC_ALL environment variable to 'C' at the top of our Gitian descriptors (see the sketch after this list).

  3. Hostname and other OS info leaks in LXC mode

    For these cases, we simply patch the pieces of Firefox that include the hostname (primarily for about:buildconfig).

  4. Millisecond and below timestamps are not fixed by libfaketime

    Gitian relies on libfaketime to set the clock to a fixed value to deal with embedded timestamps in archives and in the build process. However, in some places, Firefox inserts millisecond timestamps into its supporting libraries as part of an informational structure. We simply zero these fields.

  5. FIPS-140 mode generates throwaway signing keys

    A rather insane subsection of the FIPS-140 certification standard requires that you distribute signatures for all of your cryptographic libraries. The Firefox build process meets this requirement by generating a temporary key, using it to sign the libraries, and discarding the private portion of that key. Because there are many other ways to intercept the crypto outside of modifying the actual DLL images, we opted to simply remove these signature files from distribution. There simply is no way to verify code integrity on a running system without both OS and coprocessor assistance. Download package signatures make sense of course, but we handle those another way (as mentioned above).

  6. On Windows builds, something mysterious causes 3 bytes to randomly vary
    in the binary.

    Unable to determine the source of this, we just bitstomp the binary and regenerate the PE header checksums using strip... Seems fine so far! ;)

  7. umask leaks into LXC mode in some cases

    We fix this by manually setting umask at the top of our Gitian descriptors (also shown in the sketch after this list). Additionally, we found that we had to reset the permissions inside of tar and zip files, as the umask didn't affect them on some builds (though it did on others...)

  8. Zip and Tar reordering and attribute issues

    To aid with this and other issues with reproducibility, we created simple shell wrappers for zip and tar to eliminate the sources of non-determinism; a sketch of the tar wrapper appears after this list.

  9. Timezone leaks

    To deal with these, we set TZ=UTC at the top of our descriptors.
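
To make a few of these fixes concrete, here is a condensed sketch of the environment pinning from items 2, 7, and 9, together with a deterministic tar wrapper in the spirit of item 8. The actual scripts in our build repository differ in detail; this is illustrative only:

# At the top of a Gitian descriptor's build script:
export LC_ALL=C     # stable sort orders regardless of builder locale
export TZ=UTC       # no timezone leaks into embedded timestamps
umask 0022          # consistent permissions, even under LXC

# A deterministic tar: sorted member order, fixed ownership and
# permission metadata, and a fixed mtime on every entry.
find "$1" -print0 | LC_ALL=C sort -z | \
    tar --null --no-recursion -T - \
        --owner=root --group=root --numeric-owner \
        --mtime='2000-01-01 00:00:00' \
        -cf "$2"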

Future Work

The most common question we've been asked about this build process is: What can be done to prevent the adversary from compromising the (substantially weaker) Ubuntu build and packaging processes, and further, what about the Trusting Trust attack?

In terms of eliminating the remaining single points of compromise, the first order of business is to build all of our compilers and toolchain directly from sources via their own Gitian descriptors.

Once this is accomplished, we can begin the process of building identical binaries from multiple different Linux distributions. This would require the adversary to compromise multiple Linux distributions in order to compromise the Tor software distribution.

If we can support reproducible builds through cross compiling from multiple architectures (Intel, ARM, MIPS, PowerPC, etc), this also reduces the likelihood of a Trusting Trust attack surviving unnoticed in the toolchain (because the machine code that injects the payload would have to be pre-compiled and present for all copies of the cross-compiled executable code in a way that is still not visible in the sources).

If those Linux distributions also support reproducible builds of the full build and toolchain environment (both Debian and Fedora have started this), we can eliminate Trusting Trust attacks entirely by using Diverse Double Compilation between multiple independent distribution toolchains, and/or assembly audited compilers. In other words, we could use the distributions' deterministic build processes to verify that identical build environments are produced through Diverse Double Compilation.

As can be seen, much work remains before the system is fully resistant against all forms of malware injection, but even in their current state, reproducible builds are a huge step forward in software build security. We hope this information helps other software distributors to follow the example set by Bitcoin and Tor.

Tor Browser Bundle 3.0alpha4 Released

The fourth alpha release in the 3.0 series of the Tor Browser Bundle is now available from the Tor Package Archive:

This release includes important security updates to Firefox. Here is the complete ChangeLog:

  • All Platforms:
    • Bug #8751: Randomize TLS HELLO timestamp in HTTPS connections
    • Bug #9790 (workaround): Temporarily re-enable JS-Ctypes for cache
      isolation and SSL Observatory

    • Update Firefox to 17.0.9esr
    • Update Tor to
    • Update NoScript to
    • Update Tor-Launcher to 0.2.2-alpha
      • Bug #9675: Provide feedback mechanism for clock-skew and other early
        startup issues

      • Bug #9445: Allow user to enter bridges with or without 'bridge' keyword
      • Bug #9593: Use UTF16 for Tor process launch to handle unicode paths.
      • misc: Detect when Tor exits and display appropriate notification
    • Update Torbutton to
      • Bug 9492: Fix Torbutton logo on OSX and Windows (and related
        initialization code)

      • Bug 8839: Disable Google/Startpage search filters using Tor-specific urls

As usual these binaries should be exactly reproducible by anyone with Ubuntu and KVM support. To build your own identical copies of these bundles from source code, check out the official repository and use git tag tbb-3.0alpha4-build1 (commit d1fad5a54345d9dad8f8997f2f956d3f4fdeb0f4).

These instructions should explain things from there. If you notice any differences from the official bundles, I would love to hear about it!

Deterministic Builds Part One: Cyberwar and Global Compromise

I've spent the past few months developing a new build system for the 3.0 series of the Tor Browser Bundle that produces what are called "deterministic builds" -- packages which are byte-for-byte identical no matter who actually builds them, or what hardware they use. This effort was extraordinarily involved, consuming all of my development time for over two months (including several nights and weekends), babysitting builds and fixing differences and issues that arose.

When describing my recent efforts to others, by far the two most common questions I've heard are "Why did you do that?" and "How did you do that?". I've decided to answer each question at length in a separate blog post. This blog post attempts to answer the first question: "Why would anyone want a deterministic build process?"

The short answer is: to protect against targeted attacks. Current popular software development practices simply cannot survive targeted attacks of the scale and scope that we are seeing today. In fact, I believe we're just about to witness the first examples of large scale "watering hole" attacks. This would be malware that attacks the software development and build processes themselves to distribute copies of itself to tens or even hundreds of millions of machines in a single, officially signed, instantaneous update. Deterministic, distributed builds are perhaps the only way we can reliably prevent these types of targeted attacks in the face of the endless stockpiling of weaponized exploits and other "cyberweapons".

The Dangerous Pursuit of "Cyberweapons" and "Cyberwar"

For the past several years, we've been seeing a steady increase in the weaponization, stockpiling, and the use of software exploits by multiple governments, and by multiple agencies of multiple governments. It would seem that no networked computer is safe from a discovered but undisclosed and already weaponized vulnerability against one or more of its software components -- with each vulnerability being resold an unknown number of times.

Worse still, with Stuxnet and Flame, this stockpile has grown to include weaponized exploits specifically designed to "bridge the air gap" against even non-networked computers. Examples include exploits against software/hardware USB stacks, filesystem drivers, hard drive firmware, and even disconnected Bluetooth and Wifi interfaces. Even if these exploits themselves don't leak, the fact that they are known to exist (and are known to be deliberately kept secret from manufacturers and developers) means that other parties can begin looking for (or simply re-purchasing) the underlying vulnerabilities themselves, without fear of their disclosure or mitigation.

Unfortunately, the use of such exploits isn't limited to attacks against questionable nuclear energy programs by hostile states. The clock is certainly ticking on how long it will be before multiple other intelligence agencies, along with elements of organized crime and "terrorist" groups, have replicated these weapons.

We are essentially risking all of computing (or at least major sectors of the world economy that are dependent on specific software systems) by stockpiling these weapons, as if there would be any possibility of retaliation after a serious cyberattack. Wakeup call: There is not. In fact, the more exploits exist, the higher the risk of the wrong one leaking -- and it really only takes a chain of just a few of the right exploits for this to happen.

Software Engineering Complexity: The Doomsday Scenario

The core problem is this: given the number of dependencies present in large software projects, no amount of global surveillance, network censorship, machine isolation, or firewalling can sufficiently protect the software development process of widely deployed software projects. There is no way to prevent scenarios where malware sneaks into a development dependency through an exploit combined with code injection, and makes its way into the build process of software that is critical to the function of the world economy.

Such malware could be quite simple: One day, a timer goes off, and any computer running the infected software turns into a brick. In fact, it's not that hard to destroy a computer via software. Linux distributions have been accidentally tripping on bugs that do it for two decades now. If the right software vector is chosen (for example, a popular piece of software with a rapid release cycle and an auto-updater), a logic bomb that infects the build systems could continuously update the timestamps in the distributed versions of itself to ensure that the infected computers are only destroyed in the event that the attacker actually loses control of the software build infrastructure. If the right systems are chosen, this destruction could mean the disruption of all industrial control or supply chain systems simultaneously, disabling the ability to provide food, water, power, and aid to hundreds of millions of people in a very short amount of time.

The malware could also be more elaborate, especially if the motives are financial as opposed to purely destructive. The ability to universally deploy a backdoor that would allow modification of various aspects of financial transaction processing, stock markets, insurance records, and the supply chain records of various industries would prove tremendously profitable in the right circumstances. Just about all aspects of business are computerized now, and if the computer systems say an event did or didn't happen, that is the reality. Even short of modification, early access to information about certain events is also valuable -- unreleased earnings data from publicly traded companies being the immediate example.

In this brave new world, without the benefit of anonymity and decentralization to protect single points of failure in the software engineering process from such targeted attacks, I don't believe it is possible to keep software signing keys secure any more, nor do I believe it is possible to keep even an offline build machine secure from malware injection any more, especially against the types of adversaries that Tor has to contend with.

As someone who regularly discusses software engineering practices with the best and the brightest minds in the computer industry, I can tell you with certainty that even companies that exercise current best practices -- such as keeping their software build machines offline (and even these companies are few and far between) -- can still end up being infected, due to the existence and proliferation of the air gap bridging exploits mentioned above.

A true air gap is also difficult to achieve even if it could be used to ensure build machine integrity. For example, all of the major Windows web browser vendors employ a Microsoft run-time optimization technique called "Profile Guided Optimization". This technique requires running an initial compiled binary on a machine to produce a profile output that represents which code paths were executed and which were most expensive. This output is used to transform its code and optimize it further. In the case of browsers, this means that an untrusted, proprietary, and opaque input is derived from non-deterministic network sources (such as the Alexa Top 1000) and transferred to the build machines, to produce executable code that is manipulated and rewritten based on this network-derived, untrusted input, and upon the performance and other characteristics of the specific machine that was used to generate this profile output.
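
For those unfamiliar with the technique, the Microsoft toolchain's PGO cycle looks roughly like the following (simplified, and exact flags vary by compiler version), which should make clear why the output depends on the training run:

REM Phase 1: compile and link an instrumented binary.
cl /O2 /GL app.c /link /LTCG:PGINSTRUMENT

REM Phase 2: run it on a training workload; the .pgc profile this
REM produces depends on the workload and the machine that ran it.
app.exe < training-input.txt

REM Phase 3: relink, letting the profile rewrite the hot code paths.
cl /O2 /GL app.c /link /LTCG:PGOPTIMIZE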

This means that software development has no choice but to evolve beyond the simple models of "Trust our RSA-signed update feed produced from our trusted build machines", or else even companies like Google, Mozilla, Apple, and Microsoft are going to end up distributing state-sponsored malware in short order.

Deterministic Builds: Integrity through Decentralization

This is where the "why" of deterministic builds finally comes in: in our case, any individual can use our anonymity network to privately download our source code, verify it against public, signed, audited, and mirrored git repositories, and reproduce our builds exactly, without being subject to such targeted attacks. If they notice any differences, they can alert the public builders/signers, hopefully using a pseudonym or our anonymous trac account.

This also will eventually allow us to create a number of auxiliary authentication mechanisms for our packages, beyond just trusting a single offline build machine and a single cryptographic key's integrity. Interesting examples include providing multiple independent cryptographic signatures for packages, listing the package hashes in the Tor consensus, and encoding the package hashes in the Bitcoin blockchain.

I believe it is important for Tor to set an example on this point, and I hope that the Linux distributions will follow in making deterministic packaging the norm. Thankfully, due to our close relationship with Debian, after we whispered in a few of the right ears they have started work on this effort. Don't despair guys: it won't take two months for each Linux package. In our case, we had to cross-compile Firefox deterministically for four different target operating system and architecture combinations and fix a number of Firefox-specific build issues, all of which I will describe in the second post on the technical details.

Tor Browser Bundle 3.0alpha3 Released

The third alpha release in the 3.0 series of the Tor Browser Bundle is now available from the Tor Package Archive:

This release includes important security updates to Firefox. Here is the complete ChangeLog:

  • All Platforms:
    • Update Firefox to 17.0.8esr

    • Update Tor to
    • Update HTTPS-Everywhere to 3.3.1
    • Update NoScript to
    • Improve build input fetching and authentication
    • Bug #9283: Update NoScript prefs for usability.
    • Bug #6152 (partial): Disable JSCtypes support at compile time
    • Update Torbutton to 1.6.1
      • Bug 8478: Change when window resize code fires to avoid rounding errors
      • Bug 9331: Hack a correct download URL for the next TBB release
      • Bug 9144: Change an aboutTor.dtd string so transifex will accept it
    • Update Tor-Launcher to 0.2.1-alpha
      • Bug #9128: Remove dependency on JSCtypes
  • Windows:
    • Bug #9195: Disable download manager AV scanning (to prevent cloud
      reporting+scanning of downloaded files)
  • Mac:
    • Bug #9173 (partial): Launch firefox-bin on MacOS instead of
      (improves dock behavior).

As usual these binaries should be exactly reproducible by anyone with Ubuntu and KVM support (though there are some issues in LXC).
To build your own identical copies of these bundles from source code, check out the official repository and use git tag tbb-3.0alpha3-release (commit 49db54d147bd0bccc26f1d4f859cf9fe97e5f14c).

These instructions should explain things from there. If you notice any differences from the official bundles, I would love to hear about it!

Tor security advisory: Old Tor Browser Bundles vulnerable

An attack that exploits a Firefox vulnerability in JavaScript has been observed in the wild. Specifically, Windows users using the Tor Browser Bundle (which includes Firefox plus privacy patches) appear to have been targeted.

This vulnerability was fixed in Firefox 17.0.7 ESR. The following versions of the Tor Browser Bundle include this fixed version:

Tor Browser Bundle users should ensure they're running a recent enough bundle version, and consider taking further security precautions.

Read the full advisory here:
