
Mission Improbable: Hardening Android for Security and Privacy

Updates: See the Changes section for a list of changes since initial posting.

After a long wait, the Tor project is happy to announce a refresh of our Tor-enabled Android phone prototype.

This prototype is meant to show a possible direction for Tor on mobile. While I use it myself for my personal communications, it has some rough edges, and installation and update will require familiarity with Linux.

The prototype is also meant to show that it is still possible to replace and modify your mobile phone's operating system while retaining verified boot security - though only just barely. The Android ecosystem is moving very fast, and in this rapid development, we are concerned that the freedom of users to use, study, share, and improve the operating system software on their phones is being threatened. If we lose these freedoms on mobile, we may never get them back. This is especially troubling as mobile access to the Internet becomes the primary form of Internet usage worldwide.

Quick Recap

We are trying to demonstrate that it is possible to build a phone that respects user choice and freedom, vastly reduces vulnerability surface, and sets a direction for the ecosystem with respect to how to meet the needs of high-security users. Obviously this is a large task. Just as with our earlier prototype, we are relying on suggestions and support from the wider community.

Help from the Community

When we released our first prototype, the Android community exceeded our wildest expectations with respect to their excitement and contributions. The comments on our initial blog post were filled with helpful suggestions.

Soon after that post went up, Cédric Jeanneret took my Droidwall scripts and adapted them into the very nice OrWall, which is exactly how we think a Tor-enabled phone should work in general. Users should have full control over what information applications can access on their phones, including Internet access, and have control over how that Internet access happens. OrWall provides the networking component of this access control. It allows the user to choose which apps route through Tor, which route through non-Tor, and which can't access the Internet at all. It also has an option to let a specific Voice over IP app, like Signal, bypass Tor for the UDP voice data channel, while still sending call setup information over Tor.
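For the curious, the general technique behind this kind of firewall is transparent proxying with per-UID iptables rules. The following is a minimal sketch of the idea, not OrWall's actual rule set: the UIDs are hypothetical, Orbot's transparent proxy port defaults to 9040, and the DNSPort value is an assumption.

    # Sketch of per-app torification; UIDs are hypothetical examples.
    ORBOT_UID=10101   # Orbot itself, allowed to reach the network directly
    APP_UID=10123     # an app the user has routed through Tor

    # Allow loopback (redirected packets are delivered locally) and Orbot.
    iptables -A OUTPUT -o lo -j ACCEPT
    iptables -A OUTPUT -m owner --uid-owner $ORBOT_UID -j ACCEPT

    # Redirect the app's TCP connections into Tor's transparent proxy,
    # and its DNS queries to Tor's DNSPort.
    iptables -t nat -A OUTPUT -m owner --uid-owner $APP_UID -p tcp --syn \
        -j REDIRECT --to-ports 9040
    iptables -t nat -A OUTPUT -m owner --uid-owner $APP_UID -p udp --dport 53 \
        -j REDIRECT --to-ports 5400

    # Default-deny everything else, so the phone fails closed if Orbot
    # crashes or has not started yet.
    iptables -A OUTPUT -j DROP

The default-deny rule at the end is the crucial difference from VPN-based approaches: if Orbot dies, traffic stops rather than leaking.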

At around the time that our blog post went up, the Copperhead project began producing hardened builds of Android. These hardening features make it more difficult to exploit Android vulnerabilities, and also provide WiFi MAC address randomization, so that it is no longer trivial to track devices using this information.

Copperhead is also the only Android ROM that supports verified boot, which prevents exploits from modifying the boot, system, recovery, and vendor device partitions. Copperhead has also extended this protection by preventing system applications from being overridden by Google Play Store apps, or from writing bytecode to writable partitions (where it could be modified and infected). This makes Copperhead an excellent choice for our base system.

The Copperhead Tor Phone Prototype

Upon the foundation of Copperhead, Orbot, OrWall, F-Droid, and other community contributions, we have built an installation process that provisions a new Copperhead phone with Orbot, OrWall, SuperUser, Google Play, and MyAppList, preloaded with a list of recommended apps from F-Droid.

We require SuperUser and OrWall instead of using the VPN APIs because the Android VPN APIs are still not as reliable as a firewall in terms of preventing leaks. Without a firewall-based solution, the VPN can leak at boot, or if Orbot is killed or crashes. Additionally, DNS leaks outside of Tor still occur with the VPN APIs on some systems.

We provide Google Play primarily because Signal still requires it, but also because some users will probably want apps from the Play Store. You do not need a Google account to use Signal, but without one you will need to download the Signal Android package and sideload it manually (via adb install).

The need to install these components to the system partition means that we must re-sign the Copperhead image and updates if we want to retain the system integrity guarantees of Verified Boot.

Thankfully, the Nexus devices supported by Copperhead allow the use of user-generated keys. The installation process simply takes a Copperhead image, installs our additional apps, and re-signs it with the new keys.
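For those unfamiliar with AOSP release signing, the overall shape is roughly the standard target-files signing flow sketched below. This is an illustrative sketch, not the actual contents of our scripts; the key names follow AOSP conventions, and the subject string and file names are assumptions.

    # Generate a set of release keys (the subject string is an example).
    subject='/C=US/ST=CA/L=SF/O=Example/CN=mission-improbable'
    for key in releasekey platform shared media; do
        development/tools/make_key keys/$key "$subject"
    done

    # Re-sign a target-files package with the new keys. The -o flag also
    # replaces the OTA verification keys, so future updates must be
    # signed by us rather than by Copperhead.
    build/tools/releasetools/sign_target_files_apks \
        -o -d keys copperhead-target_files.zip signed-target_files.zip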

Systemic Threats to Software Freedom

Unfortunately, not only is Copperhead the only Android rebuild that supports Verified Boot, but Google's Nexus/Pixel line is the only Android hardware that allows users to install their own keys, retaining both the ability to modify the device and the filesystem security provided by verified boot.

This, combined with Google's increasing hostility towards Android as a fully Open Source platform, as well as the difficulty for external entities to keep up with Android's surprise release and opaque development processes, means that the freedom of end-users to use, study, share, and improve the Android system is in great jeopardy.

This all means that the Android platform is effectively moving to the "Look but don't touch" Shared Source model that Microsoft tried in the early 2000s. However, instead of being explicit about this, Google appears to be doing it surreptitiously. It is a deeply disturbing trend.

It is unfortunate that Google seems to see locking down Android as the only solution to the fragmentation and resulting insecurity of the Android platform. We believe that more transparent development and release processes, along with deals for longer device firmware support from SoC vendors, would go a long way to ensuring that it is easier for good OEM players to stay up to date. Simply moving more components to Google Play, even though it will keep those components up to date, does not solve the systemic problem that there are still no OEM incentives to update the base system. Users of old AOSP base systems will always be vulnerable to library, daemon, and operating system issues. Simply giving them slightly more up to date apps is a bandaid that both reduces freedom and does not solve the root security problems. Moreover, as more components and apps are moved to closed source versions, Google is reducing its ability to resist the demand that backdoors be introduced. It is much harder to backdoor an open source component (especially with reproducible builds and binary transparency) than a closed source one.

If Google Play is to be used as a source of leverage to solve this problem, a far better approach would be to use it as a pressure point to mandate that OEMs keep their base system updated. If they fail to do so, their users will begin to lose Google Play functionality, with proper warning that notifies them that their vendor is not honoring their support agreement. In a more extreme version, the Android SDK itself could have compiled code that degrades app functionality or disables apps entirely when the base system becomes outdated.

Another option would be to change the license of AOSP itself to require that any parties that distribute binaries of the base system must provide updates to all devices for some minimum period of time. That would create a legal avenue for class-action lawsuits or other legal action against OEMs that make "fire and forget" devices that leave their users vulnerable, and endanger the Internet itself.

While extreme, both of these options would be preferable to completely giving up on free and open computing for the future of the Internet. Google should be competing on overall Google account integration experience, security, app selection, and media store features. They should use their competitive position to encourage/enforce good OEM behavior, not to create barriers and bandaids that end up enabling yet more fragmentation due to out of date (and insecure) devices.

It is for this reason that we believe that projects like Copperhead are incredibly important to support. Once we lose these freedoms on mobile, we may never get them back. It is especially troubling to imagine a future where mobile access to the Internet is the primary form of Internet usage, and for that usage, all users are forced to choose between having either security or freedom.

Hardware Choice

The hardware for this prototype is the Google Nexus 6P. While we would prefer to support lower end models for low income demographics, only the Nexus and Pixel lines support Verified Boot with user-controlled keys. We are not aware of any other models that allow this, but we would love to hear if there are any that do.

In theory, installation should work for any of the devices supported by Copperhead, but updating the device will require the addition of an updater-script and an adaptation of the releasetools.py for that device, to convert the radio and bootloader images to the OTA update format.

If you are not allergic to buying hardware online, we highly recommend that you order a device from the Copperhead store. The devices are shipped with tamper-evident security tape, for what it's worth. Otherwise, if you're lucky, you might still be able to find a 6P at your local electronics retailer. Please consider donating to Copperhead anyway. The project is doing everything right, and could use your support.

Hopefully, we can add support for the newer Pixel devices as soon as AOSP (and Copperhead) support them, too.

Installation

Before you dive in, remember that this is a prototype, and you will need to be familiar with Linux.

With the proper prerequisites, installation should be as simple as checking out the Mission Improbable git repository, and downloading a Copperhead factory image for your device.

The run_all.sh script should walk you through a series of steps, printing out instructions for unlocking the phone and flashing the system. Please read the instructions in the repository for full installation details.
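In other words, the expected workflow looks something like the following sketch (the repository URL here is an assumption; use the link above, and see the repository README for the authoritative steps):

    # Assumed workflow; run_all.sh prints the authoritative instructions.
    git clone https://github.com/mikeperry-tor/mission-improbable.git
    cd mission-improbable
    # Download the Copperhead factory image for your device into this
    # directory, then let the script walk you through unlock, signing,
    # and flashing:
    ./run_all.sh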

The very first device boot after installation will take a while, so be patient. During this boot, you should note the fingerprint of your key on the yellow boot splash screen. That fingerprint is how you verify that the boot chain is signed with your key before trusting the rest of the boot process.

Once the system is booted, after you have given Google Play Services the Location and Storage permissions (as per the instructions printed by the script), make sure you set the Date and Time accurately, or Orbot will not be able to connect to the Tor Network.

Then, you can start Orbot, and allow F-Droid, Download Manager, the Copperhead updater, Google Play Services (if you want to use Signal), and any other apps you want to access the network.

NOTE: To keep Orbot up to date, you will have to go into the F-Droid Repositories settings and enable the Guardian Project Official Releases repository.

Installation: F-Droid apps

Once you have networking and F-Droid working, you can use MyAppList to install apps from F-Droid. Our installation provides a list of useful apps for MyAppList. The MyAppList app will allow you to select the subset you want, and install those apps in succession by invoking F-Droid. Start this process by tapping the upward arrow at the bottom right of the screen.

Alternatively, you can add links to additional F-Droid packages in the APK URL list prior to running the installation, and they will be downloaded and installed by run_all.sh.

NOTE: Do not update OrWall past 1.1.0 via F-Droid until issue 121 is fixed, or networking will break.

Installation: Signal

Signal is one of the most useful communications applications to have on your phone. Unfortunately, despite being open source itself, Signal is not included in F-Droid, for historical reasons. As near as we can tell, most of the issues behind that dispute have since been resolved. Now that Signal is reproducible, we see no reason why it can't be included in some F-Droid repo, if not the F-Droid repo, so long as it is the same Signal with the same key. It is unfortunate to see so much disagreement over this point, though. Even if Signal can't meet the criteria for the official F-Droid repo (or wherever that tirefire of a flamewar is at right now), we wish that at the very least it could meet the criteria for an alternate "Non-Free" repo, much like the Debian project provides. Nothing is preventing the redistribution of the official Signal apk.

For now, if you do not wish to use a Google account with Google Play, it is possible to download the Signal apks from one of the apk mirror sites (such as APK4fun, apkdot.com, or apkplz.com). To ensure that you have the official Signal apk, perform the following:

  1. Download the apk.
  2. Unzip the apk with unzip org.thoughtcrime.securesms.apk
  3. Verify that the signing key is the official key with keytool -printcert -file META-INF/CERT.RSA
  4. You should see a line with SHA256: 29:F3:4E:5F:27:F2:11:B4:24:BC:5B:F9:D6:71:62:C0 EA:FB:A2:DA:35:AF:35:C1:64:16:FC:44:62:76:BA:26
  5. Make sure that fingerprint matches (the space was added for formatting).
  6. Verify that the contents of that APK are properly signed by that cert with: jarsigner -verify org.thoughtcrime.securesms.apk. You should see jar verified printed out.
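For convenience, those steps condense into the following shell session (the same commands as above):

    unzip org.thoughtcrime.securesms.apk META-INF/CERT.RSA
    keytool -printcert -file META-INF/CERT.RSA | grep SHA256
    # Compare the printed fingerprint against the one above, then verify
    # the signature over the whole archive:
    jarsigner -verify org.thoughtcrime.securesms.apk   # expect "jar verified."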


Then, you can install the Signal APK via adb with adb install org.thoughtcrime.securesms.apk. You can use ApkTrack to verify that your version stays up to date with the one in the app store.

For voice calls to work, select Signal as the SIP application in OrWall, and allow SIP access.

Updates

Because Verified Boot ensures filesystem integrity at the device block level, and because we modify the root and system filesystems, normal over the air updates will not work. The fact that we use different device keys will prevent the official updates from installing at all, but even if they did, they would remove the installation of Google Play, SuperUser, and the OrWall initial firewall script.

When the phone notifies you of an update, you should instead download the latest Copperhead factory image to the mission-improbable working directory, and use update.sh to convert it into a signed update zip that will get sideloaded and installed by the recovery. You need to have the same keys from the installation in the keys subdirectory.

The update.sh script should walk you through this process.
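A typical update cycle therefore looks roughly like this sketch (the zip file name is an assumption; update.sh prints the actual name and sideload instructions):

    cd mission-improbable
    # Download the new Copperhead factory image into this directory, then:
    ./update.sh
    # Reboot the phone into recovery, choose "Apply update from ADB", and:
    adb sideload update.zip   # use the zip name that update.sh produced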

Updates may also reset the system clock, which must be accurate for Orbot to connect to the Tor network. If this happens, you may need to reset the clock manually under Date and Time Settings.

Usage

I use this prototype for all of my personal communications - Email, Signal, XMPP+OTR, Mumble, offline maps and directions in OSMAnd, taking pictures, and reading news and books. I use Intent Intercept to avoid accidentally clicking on links, and to avoid surprising cross-app launching behavior.

For Internet access, I personally use a secondary phone that acts as a router for this phone while it is in airplane mode. That phone has an app store and I use it for less trusted, non-private applications, and for emergency situations should a bug with the device prevent it from functioning properly. However, it is also possible to use a cheap wifi cell router, or simply use the actual cell capabilities on the phone itself. In that case, you may want to look into CSipSimple, and a VoIP provider, but see the Future Work section about potential snags with using SIP and Signal at the same time.

I also often use Google Voice or SIP numbers instead of the number of my actual phone's SIM card just as a general protection measure. I give people this number instead of the phone number of my actual cell device, to prevent remote baseband exploits and other location tracking attacks from being trivial to pull off from a distance. This is a trade-off, though, as you are trusting the VoIP provider with your voice data, and on top of this, many of them do not support encryption for call signaling or voice data, and fewer still support SMS.

For situations where using the cell network at all is either undesirable or impossible (perhaps because it is disabled due to civil unrest), the mesh network messaging app Rumble shows a lot of promise. It supports both public and encrypted groups in a Twitter-like interface run over either a wifi or bluetooth ad-hoc mesh network. It could use some attention.

Future Work

As with the last post on this topic, this prototype obviously has a lot of unfinished pieces and unpolished corners. We've made a lot of progress as a community on many of the future work items from that post, but many still remain.

Future work: More Device Support

As mentioned above, installation should work on all devices that Copperhead supports out of the box. However, updates require the addition of an updater-script and an adaptation of the releasetools.py for that device, to convert the radio and bootloader images to the OTA update format.

Future Work: MicroG support

Instead of Google Play Services, it might be nice to provide the Open Source MicroG replacements. This requires some hackery to spoof the Google Play Services signature, though. Unfortunately, as implemented, this method creates a permission that any app can request in order to spoof signatures for any service. We'd be much happier about this if we could find a way for MicroG to be the only app able to spoof signatures, and only for the Google services it replaces. This may be as simple as hardcoding those app ids in an updated version of one of these patches.

Future Work: Netfilter API (or better VPN APIs)

Back in the WhisperCore days, Moxie wrote a Netfilter module using libiptc that enabled apps to edit iptables rules if they had permissions for it. This would eliminate the need for iptables shell callouts by OrWall, would be more stable and less leaky than the current VPN APIs, and would eliminate the need to have root access on the device (which is additional vulnerability surface). That API needs to be dusted off and updated for Copperhead compatibility, and then OrWall would need to be updated to use it, if present.

Alternatively, the VPN API could be used, if there were ways to prevent leaks at boot, DNS leaks, and leaks if the app is killed or crashes. We'd also want the ability to control specific app network access, and allow bypass of UDP for VoIP apps.

Future Work: Fewer Binary Blobs

There are unfortunately quite a few binary blobs extracted from the Copperhead build tree in the repository. They are enumerated in the README. This was done for expedience: building some of those components outside of the Android build tree is fairly difficult. We would happily accept patches for this, or for replacement tools.

Future Work: F-Droid auto-updates, crash reporting, and install count analytics

These requests come from Moxie. Having these would make him much happier about F-Droid Signal installs.

It turns out that F-Droid supports full auto-updates with the Privileged Extension, which Copperhead is working on including.

Future Work: Build Reproducibility

Copperhead itself is not yet built reproducibly. It's our opinion that this is AOSP's responsibility, though. If the core team at Google won't do it themselves, they should at least fund Copperhead or some other entity to work on it for them. Reproducible builds should be an organizational priority for all software companies. Moreover, in combination with free software, they are an excellent deterrent against backdoors.

In this brave new world, even if we can trust that the NSA won't be ordered to attack American companies to insert backdoors, deteriorating relationships with China and other state actors may mean that their incentives to hold back on such attacks will be greatly reduced. Closed source components can also benefit from reproducible builds, since compromising multiple build systems/build teams is inherently harder than compromising just one.

Future Work: Orbot Stability

Unfortunately, the stability of Orbot itself still leaves a lot to be desired. It is fairly fragile in the face of network disconnects, and often becomes stuck in states that require you to go into the Android Settings for Apps and Force Stop Orbot before it can reconnect properly. The startup UI is similarly fragile to changes in network connectivity.

Worse: if you tap the start button either too hard or multiple times while the network is disconnected or while the phone's clock is out of sync, Orbot can become confused and say that it is connected when it is not. Luckily, because Tor network access is enforced by OrWall (and the Android kernel), instabilities in Orbot do not risk Tor leaks.

Future Work: Backups and Remote Wipe

Unfortunately, backups are an unsolved problem. In theory, adb backup -all should work, but even the latest adb version from the official Android SDK appears to back up and restore only partial data. Apparently this is due to adb obeying manifest restrictions on apps that request not to be backed up. For the purposes of full device backup, it would be nice to have an adb version that really backed up everything.

Instead, I use the export feature of K-9 Mail, Contacts, and the Calendar Import-Export app to export that data to /sdcard, and then adb pull /sdcard. It would be nice to have an end-to-end encrypted remote backup app, though. Flock had promise, but was unfortunately discontinued.
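Until a better solution exists, a backup session looks roughly like the following (the export file names are examples; each app chooses its own):

    # Partial device backup; apps that opt out via their manifest
    # (android:allowBackup="false") are skipped.
    adb backup -all -apk -shared -f full-backup.ab

    # Pull the per-app exports written to /sdcard.
    adb pull /sdcard/k9-settings.k9s
    adb pull /sdcard/contacts.vcf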

Similarly, if a phone is lost, it would be nice to have a cryptographically secure remote wipe feature.

Future Work: Baseband Analysis (and Isolation)

Until phones with auditable baseband isolation are available (the Neo900 looks like a promising candidate), the baseband remains a problem on all of these phones. It is unknown if vulnerabilities or backdoors in the baseband can turn on the mic, make silent calls, or access device memory. Using a portable hotspot or secondary insecure phone is one option for now, but it is still unknown if the baseband is fully disabled in airplane mode. In the previous post, commenters recommended wiping the baseband, but on most phones, this seems to also disable GPS.

It would be useful to audit whether airplane mode fully disables the baseband using either OpenBTS, OsmocomBB, or a custom hardware monitoring device.

Future Work: Wifi AP Scanning Prevention

Copperhead may randomize the MAC address, but it is quite likely that it still tries to connect to configured APs, even if they are not there (see these two XDA threads). This can reveal information about your home and work networks, and any other networks you have configured.

There is a Wifi Privacy Police app in F-Droid, and Smarter WiFi may be another option, but we have not yet had time to audit or test either. Any reports would be useful here.

Future Work: Port Tor Browser to Android

The Guardian Project is undertaking a port of Tor Browser to Android as part of their OrFox project. This port is still incomplete, however. The Tor Project is working on obtaining funding to bring it on par with the desktop Tor Browser.

Future Work: Better SIP Support

Right now, it is difficult to use two or more SIP clients in OrWall. You basically have to switch between them in the settings, which is also fragile and error prone. It would be ideal if OrWall allowed multiple SIP apps to be selected.

Additionally, SIP providers and SIP clients have very poor support for TLS and SRTP encryption for call setup and voice data. I could find only two such providers that advertised this support, but I was unable to actually get TLS and SRTP working with CSipSimple or LinPhone for either of them.

Future Work: Installation and full OTA updates without Linux

In order for this to become a real end-user phone, we need to remove the requirement to use Linux in order to install and update it. Unfortunately, this is tricky. Technically, Google Play can't be distributed in a full Android firmware, so we'd have to get special approval for that. Alternatively, we could make the default install use MicroG, as above. In either case, it should just be a matter of taking the official Copperhead builds, modifying them, changing the update URL, and shipping those devices with Google Play/MicroG and the new OTA location. Copperhead or Tor could easily support multiple device install configurations this way without needing to rebuild everything for each one. So legal issues aside, users could easily have their choice of MicroG, Google Play, or neither.

Personally, I think the demand is higher for some level of Google account integration functionality than what MicroG provides, so it would be nice to find some way to make that work. But there are solid reasons for avoiding the use of a Google account (such as Google's mistreatment of Tor users, the unavailability of Google in certain areas of the world due to censorship of Google, and the technical capability of Google Play to send targeted backdoored versions of apps to specific accounts).

Future Work: Better Boot Key Representation/Authentication

The truncated fingerprint is not the best way to present a key to the user. It is both too short for security, and too hard to read. It would be better to use something like the SSH Randomart representation, or some other visual representation that encodes a cryptographically strong version of the key fingerprint, and asks the user to click through it to boot. Though obviously, if this boot process can also be modified, this may be insufficient.
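OpenSSH already demonstrates what this could look like: ssh-keygen can render any key fingerprint as ASCII randomart. The key path below is illustrative:

    # Print a key's fingerprint together with its randomart visualization.
    ssh-keygen -lv -f ~/.ssh/id_ed25519.pub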

Future Work: Faster GPS Lock

Location on these devices is GPS-only by default (with no network-assisted location), which can make getting a location lock very slow. It would be useful to find out if µg UnifiedNlp can help, and which of its backends are privacy-preserving enough to recommend or enable by default.

Future Work: Sensor Management/Removal

As pointed out in great detail in one of the comments below, these devices have a large number of sensors on them that can be used to create side channels, gather information about the environment, and send it back. The original Mission Impossible post went into quite a bit of detail about how to remove the microphone from the device. This time around, I focused on software security. But as the commenter suggested, you can still go down the hardware-modding rabbit hole if you like. Just search YouTube for "Nexus 6P teardown" or similar.


Changes Since Initial Posting

Like the last post, this post will likely be updated for a while based on community feedback. Here is the list of those changes so far.

  1. Added information about secondary SIP/VoIP usage in the Usage section and the Future Work sections.
  2. Added a warning not to upgrade OrWall until Issue 121 is fixed.
  3. Describe how we could remove the Linux requirement and have OTA updates, as a Future Work item.
  4. Remind users to check their key fingerprint at installation and boot, and point out in the Future Work section that this UI could be better.
  5. Mention the Neo900 in the Future Work: Baseband Isolation section
  6. Wow, the Signal vs F-Droid issue is a stupid hot mess. Can't we all just get along and share the software? Don't make me sing the RMS song, people... I'll do it...
  7. Added a note that you need the Guardian Project F-Droid repo to update Orbot.
  8. Add a thought to the Systemic Threats to Software Freedom section about using licensing to enforce the update requirement in order to use the AOSP.
  9. Mention ApkTrack for monitoring for Signal updates, and Intent Intercept for avoiding risky clicks.
  10. Mention alternate location providers as Future Work, and that we need to pick a decent backend.
  11. Link to Conversations and some other apps in the usage section. Also add some other links here and there.
  12. Mention that Date and Time must be set correctly for Orbot to connect to the network.
  13. Added a link to Moxie's netfilter code to the Future Work section, should anyone want to try to dust it off and get it working with Orwall.
  14. Use keytool instead of sha256sum to verify the Signal key's fingerprint. The CERT.RSA file is not stable across versions.
  15. The latest Orbot 15.2.0-rc8 still has issues claiming that it is connected when it is not. This is easiest to observe if the system clock is wrong, but it can also happen on network disconnects.
  16. Add a Future Work section for sensor management/removal

The Trouble with CloudFlare

Wednesday, CloudFlare blogged that 94% of the requests it sees from Tor are "malicious." We find that unlikely, and we've asked CloudFlare to provide justification to back up this claim. We suspect this figure is based on a flawed methodology by which CloudFlare labels all traffic from an IP address that has ever sent spam as "malicious." Tor IP addresses are conduits for millions of people who are then blocked from reaching websites under CloudFlare's system.

We're interested in hearing CloudFlare's explanation of how they arrived at the 94% figure and why they choose to block so much legitimate Tor traffic. While we wait to hear from CloudFlare, here's what we know:

1) CloudFlare uses an IP reputation system to assign scores to IP addresses that generate malicious traffic. In their blog post, they mentioned obtaining data from Project Honey Pot, in addition to their own systems. Project Honey Pot has an IP reputation system that causes IP addresses to be labeled as "malicious" if they ever send spam to a select set of diagnostic machines that are not normally in use. CloudFlare has not described the nature of the IP reputation systems they use in any detail.

2) External research has found that CloudFlare blocks at least 80% of Tor IP addresses, and this number has been steadily increasing over time.

3) That same study found that it typically took 30 days for an event to happen that caused a Tor IP address to acquire a bad reputation and become blocked; once that happened, innocent users continued to be punished for it for the duration of the study.

4) That study also showed a disturbing increase over time in how many IP addresses CloudFlare blocked without removal. CloudFlare's approach to blocking abusive traffic is incurring a large amount of false positives in the form of impeding normal traffic, thereby damaging the experience of many innocent Tor and non-Tor Internet users, as well as impacting the revenue streams of CloudFlare's own customers by causing frustrated or blocked users to go elsewhere.

5) A report by CloudFlare competitor Akamai found that the percentage of legitimate e-commerce traffic originating from Tor IP addresses is nearly identical to that originating from the Internet at large. (Specifically, Akamai found that the "conversion rate" of Tor IP addresses clicking on ads and performing commercial activity was "virtually equal" to that of non-Tor IP addresses).

CloudFlare disagrees with our use of the word "block" when describing its treatment of Tor traffic, but that's exactly what their system ultimately does in many cases. Users are either blocked outright with CAPTCHA server failure messages, or prevented from reaching websites with a long (and sometimes endless) loop of CAPTCHAs, many of which require the user to understand English in order to solve correctly. For users in developing nations who pay for Internet service by the minute, the problem is even worse as the CAPTCHAs load slowly and users may have to solve dozens each day with no guarantee of reaching a particular site. Rather than waste their limited Internet time, such users will either navigate away, or choose not to use Tor and put themselves at risk.

Also see our new fact sheet about CloudFlare and Tor: https://people.torproject.org/~lunar/20160331-CloudFlare_Fact_Sheet.pdf

A Statement from The Tor Project on Software Integrity and Apple

The Tor Project exists to provide privacy and anonymity for millions of people, including human rights defenders across the globe whose lives depend on it. The strong encryption built into our software is essential for their safety.

In an age when people have so little control over the information recorded about their lives, we believe that privacy is worth fighting for.

We therefore stand with Apple to defend strong encryption and to oppose government pressure to weaken it. We will never backdoor our software.

Our users face very serious threats. These users include bloggers reporting on drug violence in Latin America; dissidents in China, Russia, and the Middle East; police and military officers who use our software to keep themselves safe on the job; and LGBTI individuals who face persecution nearly everywhere. Even in Western societies, studies demonstrate that intelligence agencies such as the NSA are chilling dissent and silencing political discourse merely through the threat of pervasive surveillance.

For all of our users, their privacy is their security. And for all of them, that privacy depends upon the integrity of our software, and on strong cryptography. Any weakness introduced to help a particular government would inevitably be discovered and could be used against all of our users.

The Tor Project employs several mechanisms to ensure the security and integrity of our software. Our primary product, the Tor Browser, is fully open source. Moreover, anyone can obtain our source code and produce bit-for-bit identical copies of the programs we distribute using Reproducible Builds, eliminating the possibility of single points of compromise or coercion in our software build process. The Tor Browser downloads its software updates anonymously using the Tor network, and update requests contain no identifying information that could be used to deliver targeted malicious updates to specific users. These requests also use HTTPS encryption and pinned HTTPS certificates (a security mechanism that allows HTTPS websites to resist being impersonated by an attacker by specifying exact cryptographic keys for sites). Finally, the updates themselves are also protected by strong cryptography, in the form of package-level cryptographic signatures (the Tor Project signs the update files themselves). This use of multiple independent cryptographic mechanisms and independent keys reduces the risk of single points of failure.

The Tor Project has never received a legal demand to place a backdoor in its programs or source code, nor have we received any requests to hand over cryptographic signing material. This isn't surprising: we've been public about our "no backdoors, ever" stance, we've had clear public support from our friends at EFF and ACLU, and it's well-known that our open source engineering processes and distributed architecture make it hard to add a backdoor quietly.

From an engineering perspective, our code review and open source development processes make it likely that such a backdoor would be quickly discovered. We are also currently accelerating the development of a vulnerability-reporting reward program to encourage external software developers to look for and report any vulnerabilities that affect our primary software products.

The threats that Apple faces to hand over its cryptographic signing keys to the US government (or to sign alternate versions of its software for the US government) are no different than threats of force or compromise that any of our developers or our volunteer network operators may face from any actor, governmental or not. For this reason, regardless of the outcome of the Apple decision, we are exploring further ways to eliminate single points of failure, so that even if a government or a criminal obtains our cryptographic keys, our distributed network and its users would be able to detect this fact and report it to us as a security issue.

Like those at Apple, several of our developers have already stated that they would rather resign than honor any request to introduce a backdoor or vulnerability into our software that could be used to harm our users. We look forward to making an official public statement on this commitment as the situation unfolds. However, since requests for backdoors or cryptographic key material so closely resemble many other forms of security failure, we remain committed to researching and developing engineering solutions to further mitigate these risks, regardless of their origin.

We congratulate Apple on their commitment to the privacy and security of their users, and we admire their efforts to advance the debate over the right to privacy and security for all.

Nine Questions about Hidden Services

This is an interview with a Tor developer who works on hidden services. Please note that Tor Browser and hidden services are two different things. Tor Browser (downloadable at TorProject.org) allows you to browse, or surf, the web anonymously. A hidden service is a site you visit or a service you use that uses Tor technology to stay secure and, if the owner wishes, anonymous. The secure messaging app Ricochet is an example of a hidden service. Tor developers use the terms "hidden services" and "onion services" interchangeably.
 
1. What are your priorities for onion services development?
 
Personally I think it’s very important to work on the security of hidden services; that’s a big priority.
 
The plan for the next generation of onion services includes enhanced security as well as improved performance. We’ve broken the development down into smaller modules and we’re already starting to build the foundation. The whole thing is a pretty insane engineering job.

2. What don't people know about onion services?
 
Until earlier this year, hidden services were a labor of love that Tor developers did in their spare time. Now we have a very small group of developers, but in 2016 we want to move the engineering capacity a bit farther out. There is a lot of enthusiasm within Tor for hidden services but we need funding and more high level developers to build the next generation.
 
3. What are some of Tor's plans for mitigating attacks?
 
The CMU attack was fundamentally a "guard node" attack; guard nodes are the first hop of a Tor circuit and hence the only part of the network that can see the real IP address of a hidden service. Last July we fixed the attack vector that CMU was using (it was called the RELAY_EARLY confirmation attack) and since then we've been devising improved designs for guard node security.
 
For example, in the past, each onion service would have three guard nodes assigned to it. Since last September, each onion service uses only one guard node--it exposes itself to fewer relays. This change alone makes an attack against an onion service much less likely.
 
Several of our developers are thinking about how to do better guard node selection. One of us is writing code on this right now.
 
We are modeling how onion services pick guard nodes currently, and we're simulating other ways to do it to see which one exposes itself to fewer relays—the fewer relays you are exposed to, the safer you are.
 
We’ve been working on other security things as well. For instance, a series of papers and talks have abused the directory system of hidden services to try to estimate the activity of particular hidden services, or to launch denial-of-service attacks against them.

We’re going to fix this by making it much harder for the attacker's nodes to become the responsible relay of a hidden service (say, catfacts) and be able to track uptime and usage information. We will use a "distributed random number generator"--many computers teaming up to generate a single, fresh unpredictable random number. 

Another important thing we're doing is to make it impossible for a directory service to harvest addresses in the new design. If you don't know a hidden service address, then under the new system, you won't find it out just by hosting its HSDir entry.

There are also interesting performance things: We want to make .onion services scalable in large infrastructures like Facebook--we want high availability and better load balancing; we want to make it serious.

[Load balancing distributes the traffic load of a website to multiple servers so that no one server gets overloaded with all the users. Overloaded servers stop responding and create other problems. An attack that purposely overloads a website to cause it to stop responding is called a Denial of Service (DoS) attack.  - Kate]
 
There are also onion services that don’t care to stay hidden, like Blockchain or Facebook; we can make those much faster, which is quite exciting.
 
Meanwhile Nick is working on a new encryption design--magic circuit crypto that will make it harder to do active confirmation attacks. [Nick Mathewson is the co-founder of the Tor Project and the chief architect of our software.] Active confirmation attacks are much more powerful than passive attacks, and we can do a better job at defending against them.
 
A particular type of confirmation attack that Nick's new crypto is going to solve is a "tagging attack"—Roger wrote a blog post about them years ago called, "One Cell Is Enough"—it was about how they work and how they are powerful.
 
4. Do you run an onion service yourself?  
 
Yes, I do run onion services; I run an onion service on every box I have. I connect to the PC in my house from anywhere in the world through SSH—I connect to my onion service instead of my house IP. People can see my laptop accessing Tor but don’t know who I am or where I go.
 
Also, onion services have a property called NAT punching (NAT = Network Address Translation). NAT blocks incoming connections; it builds walls around you. Onion services have NAT punching and can penetrate a firewall. On my university campus, the firewall does not allow incoming connections to my SSH server, but with an onion service the firewall is irrelevant.
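Setting this up on a typical Linux server is pleasantly simple. A minimal sketch, assuming a stock tor daemon, sshd listening on localhost, and Debian-style paths:

    # On the server (as root): declare the onion service, then restart tor.
    cat >> /etc/tor/torrc <<'EOF'
    HiddenServiceDir /var/lib/tor/ssh_onion/
    HiddenServicePort 22 127.0.0.1:22
    EOF

    # tor writes the generated onion address here:
    cat /var/lib/tor/ssh_onion/hostname

    # On the client: route ssh through the local Tor SOCKS proxy.
    torsocks ssh user@youronionaddress.onion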
 
5. What is your favorite onion service that a nontechnical person might use? 
 
I use Ricochet for my peer-to-peer chatting--it has a very nice UI and works well.
 
6. Do you think it’s safe to run an onion service? 
 
It depends on your adversary. I think onion services provide adequate security against most real life adversaries.

However, if a serious and highly motivated adversary were after me, I would not rely solely on the security of onion services. If your adversary can wiretap the whole Western Internet, or has a million dollar budget, and you only depend on hidden services for your anonymity then you should probably up your game. You can add more layers of anonymity by buying the servers you host your hidden service on anonymously (e.g. with bitcoin) so that even if they deanonymize you, they can't get your identity from the server. Also studying and actually understanding the Tor protocol and its threat model is essential practice if you are defending against motivated adversaries.

7. What onion services don’t exist yet that you would like to see? 
 
Onion services right now are super-volatile; they may appear for three months and then they disappear. For example, there was a Twitter clone, Tor statusnet; it was quite fun--small but cozy. The guy or girl who was running it couldn’t do it any longer. So, goodbye! It would be very nice to have a Twitter clone in onion services. Everyone would be anonymous. Short messages by anonymous people would be an interesting thing.
 
I would like to see apps for mobile phones using onion services more—SnapChat over Tor, Tinder over Tor—using Orbot or whatever. 
 
A good search engine for onion services. This volatility comes down to not having a search engine—you could have a great service, but only 500 sketchoids on the Internet might know about it.
 
Right now, hidden services are misty and hard to see, with the fog of war all around. A sophisticated search engine could highlight the nice things and the nice communities; those would get far more traffic and users and would stay up longer.
 

The second question is how you make things. For many people, it’s not easy to set up an onion service. You have to open Tor, hack some configuration files, and there's more.

We need a system where you double click, and bam, you have an onion service serving your blog. Griffin Boyce is developing a tool for this named Stormy. If we have a good search engine and a way for people to start up onion services easily, we will have a much nicer and more normal Internet in the onion space. 
 
8. What is the biggest misconception about onion services?
  
People don't realize how many use cases there are for onion services or the inventive ways that people are using them already. Only a few onion services ever become well known and usually for the wrong reasons.
 
I think it ties back to the previous discussion--the onion services we all enjoy have no way of getting to us. Right now, they are marooned on their island of hiddenness.
 
9. What is the biggest misconception about onion services development?
 
It’s a big and complex project—it’s building a network inside a network; building a thing inside a thing. But we are a tiny team. We need the resources and person power to do it.
 
 

(Interview conducted by Kate Krauss)

Preliminary analysis of Hacking Team's slides

A few weeks ago, Hacking Team was bragging publicly about a Tor Browser exploit. We've learned some details of their proposed attack from a leaked PowerPoint presentation that was part of the Hacking Team dump.

The good news is that they don't appear to have any exploit on Tor or on Tor Browser. The other good news is that their proposed attack doesn't scale well. They need to put malicious hardware on the local network of their target user, which requires choosing their target, locating her, and then arranging for the hardware to arrive in the right place. So it's not really practical to launch the attack on many Tor users at once.

But they actually don't need an exploit on Tor or Tor Browser. Here's the proposed attack in a nutshell:

1) Pick a target user (say, you), figure out how you connect to the Internet, and install their attacking hardware on your local network (e.g. inside your ISP).

2) Wait for you to browse the web without Tor Browser, i.e. with some other browser like Firefox or Chrome or Safari, and then insert some sort of exploit into one of the web pages you receive (maybe the Flash 0-day we learned about from the same documents, or maybe some other exploit).

3) Once they've taken control of your computer, they configure your Tor Browser to use a socks proxy on a remote computer that they control. In effect, rather than using the Tor client that's part of Tor Browser, you'll be using their remote Tor client, so they get to intercept and watch your traffic before it enters the Tor network.

You have to stop them at step two, because once they've broken into your computer, they have many options for attacking you from there.

Their proposed attack requires Hacking Team (or your government) to already have you in their sights. This is not mass surveillance — this is very targeted surveillance.

One answer is to run a system like Tails, which avoids interacting with any local resources. In this case there should be no opportunity to insert an exploit from the local network. But that's still not a complete solution: some coffee shops, hotels, etc., will demand that you interact with their local login page before you can access the Internet. Tails includes what it calls an 'unsafe' browser for these situations, and you're at risk during the brief period when you use it.

Ultimately, security here comes down to having safer browsers. We continue to work on ways to make Tor Browser more resilient against attacks, but the key point here is that they'll go after the weakest link on your system — and at least in the scenarios they describe, Tor Browser isn't the weakest link.

As a final point, note that this is just a PowerPoint deck (probably a funding pitch), and we've found no indication yet that they ever followed through on their idea.

We'll update you with more information if we learn anything further. Stay safe out there!

iSEC Partners Conducts Tor Browser Hardening Study

In May, the Open Technology Fund commissioned iSEC Partners to study current and future hardening options for the Tor Browser. The Open Technology Fund is the primary funder of Tor Browser development, and it commissions security analysis and review for all of the projects that it funds as a standard practice. We worked with iSEC to define the scope of the engagement to focus on the following six main areas:

  1. Review of the current state of hardening in Tor Browser
  2. Investigate additional hardening options and instrumentation
  3. Perform historical vulnerability analysis on Firefox, in order to make informed vulnerability surface reduction recommendations
  4. Investigate image, audio, and video codecs and their respective libraries' vulnerability histories
  5. Review our current about:config settings, both for vulnerability surface reduction and security
  6. Review alternate/obscure protocol and application handlers


The complete report is available in the iSEC publications github repo. All tickets related to the report can be found using the tbb-isec-report keyword. General Tor Browser security tickets can be found using the tbb-security keyword.

Major Findings and Recommendations

The report had the following high-level findings and recommendations.

  • Address Space Layout Randomization is disabled on Windows and Mac

  • Due to our use of cross-compilation and non-standard toolchains in our reproducible build system, several hardening features have ended up disabled. We knew about the Windows issues prior to this report, and should have a fix for them soon. However, the MacOS issues are news to us, and appear to require that we build 64-bit versions of the Tor Browser for full support. The parent ticket for all basic hardening issues in Tor Browser is bug #10065.

  • Participate in Pwn2Own

  • iSEC recommended that we find a sponsor to fund a Pwn2Own reward for bugs specific to Tor Browser in a semi-hardened configuration. We are very interested in this idea and would love to talk with anyone willing to sponsor us in this competition, but we're not yet certain that our hardening options will have stabilized with enough lead time for the 2015 contest next March.

  • Test and recommend the Microsoft Enhanced Mitigation Experience Toolkit on Windows

  • The Microsoft Enhanced Mitigation Experience Toolkit is an optional toolkit that Windows users can run to further harden Tor Browser against exploitation. We've created bug #12820 for this analysis.

  • Replace the Firefox memory allocator (jemalloc) with ctmalloc/PartitionAlloc

  • PartitionAlloc is a memory allocator designed by Google specifically to mitigate common heap-based vulnerabilities by hardening free lists, creating partitioned allocation regions, and using guard pages to protect metadata and partitions. Its basic hardening features can be picked up by using it as a simple malloc replacement library (as ctmalloc; see the sketch after this list). Bug #10281 tracks this work.

  • Make use of advanced PartitionAlloc features and other instrumentation to reduce the risk from use-after-free vulnerabilities

  • The iSEC vulnerability review found that the overwhelming majority of vulnerabilities to date in Firefox were use-after-free, followed closely by general heap corruption. In order to mitigate these vulnerabilities, we would need to make use of the heap partitioning features of PartitionAlloc to actually ensure that allocations are partitioned (for example, by using the existing tags from Firefox's about:memory). We will also investigate enabling assertions in limited areas of the codebase, such as the refcounting system, the JIT, and the JavaScript engine.
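As referenced above, the quickest way to experiment with a drop-in allocator on Linux is to preload it rather than rebuild the browser. This is only a sketch: the library path and launcher name are assumptions, and whether LD_PRELOAD interposition actually reaches Firefox's internal jemalloc calls is exactly the kind of question this work item needs to answer.

    # Launch the browser with a candidate malloc replacement preloaded
    # (library path and launcher name are illustrative).
    LD_PRELOAD=/usr/local/lib/libctmalloc.so ./start-tor-browser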

Vulnerability Surface Reduction (Security Slider)

A large portion of the report was also focused on analyzing historical Firefox vulnerability data and other sources of large vulnerability surface for a planned "Security Slider" UI in Tor Browser.

The Security Slider was first suggested by Roger Dingledine as a way to make it easy for users to trade off between functionality and security, gradually disabling features ranked by both vulnerability count and web prevalence/usability impact.

The report makes several recommendations along these lines, but a brief distillation can be found on the ticket for the slider.

At a high level, we plan for four levels in this slider. "Low" security will be the current Tor Browser settings, with the addition of JIT support. "Medium-Low" will disable most of the JIT, and make HTML5 media click-to-play via NoScript. "Medium-High" will disable the rest of the JIT, disable JS on non-HTTPS URL bar origins, and disable SVG. "High" will fully disable JavaScript, block remote fonts via NoScript, and disable all media codecs except for WebM (which will remain click-to-play).
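Concretely, most of these levels map onto Firefox preferences plus NoScript settings. Below is a hypothetical sketch of the "High" end expressed as user.js entries; the pref names are assumptions about the eventual implementation (the profile path is also an assumption), and font blocking and click-to-play live in NoScript rather than in prefs:

    # Append hypothetical "High" security prefs to a profile's user.js.
    cat >> "$PROFILE/user.js" <<'EOF'
    user_pref("javascript.enabled", false);             // fully disable JavaScript
    user_pref("javascript.options.ion", false);         // belt and suspenders: no Ion JIT
    user_pref("javascript.options.baselinejit", false); // no baseline JIT either
    user_pref("svg.disabled", true);                    // disable SVG rendering
    EOF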

The Long Term

A web browser is a very large and complicated piece of software, and while we believe that the privacy properties of Tor Browser are better than those of every other web browser currently available, it is very important to us that we raise the bar to successful code execution and exploitation of Tor Browser as well.

We are very eager to see the deployment of sandboxing support in Firefox, which should go a long way to improving the security of Tor Browser as well. To improve security for their users, Mozilla has recently shifted 10 engineers into the Electrolysis project, which provides the groundwork for producing a multiprocess sandbox architecture for the desktop Firefox. This will allow them to provide a Google Chrome style security sandbox for website content, to reduce the risk from software vulnerabilities, and generally impede exploitability.

Until that time, we will also be investigating providing hardened builds of Tor Browser using the AddressSanitizer and Virtual Table Verification features of newer GCC releases. While this will not eliminate all vectors of memory corruption-based exploitation (in particular, the hardening properties of AddressSanitizer are not as good as those provided by SoftBounds+CETS for example, but that compiler is not yet production-ready), it should raise the bar to exploitation. We are hopeful that these builds in combination with PartitionAlloc and the Security Slider will satisfy the needs of our users who require high security and who are willing to trade performance and usability in order to get it.

We also hope to include optional application-wide sandboxes for Tor Browser as part of the official distribution.

Why not Google Chrome?

It is no secret that in many ways, both we and Mozilla are playing catch-up to reach the level of code execution security provided by Google Chrome, and in fact closely following the Google Chrome security team was one of the recommendations of the iSEC report.

In particular, Google Chrome benefits from a multiprocess sandboxing architecture, as well as several further hardening options and innovations (such as PartitionAlloc).

Unfortunately, our budget for the browser project is still very constrained compared to the amount of work that is required to provide the privacy properties we feel are important, and Firefox remains a far more cost-effective platform for us for several reasons. In particular, Firefox's flexible extension system, fully scriptable UI, solid proxy support, and its long Extended Support Release cycle all allow us to accomplish far more with fewer resources than we could with any other web browser.

Further, Google Chrome is far less amenable to supporting basic web privacy and Tor-critical features (such as solid proxy support) than Mozilla Firefox. Initial efforts to work with the Google Chrome team saw some success in terms of adding APIs that are crucial to addons such as HTTPS-Everywhere, but we ran into several roadblocks when it came to Tor-specific features and changes. In particular, several bugs required for basic proxy-safe Tor support for Google Chrome's Incognito Mode ended up blocked for various reasons.

The worst offender on this front is the use of the Microsoft Windows CryptoAPI for certificate validation, without any alternative. This bug means that certificate revocation checking and intermediate certificate retrieval happen outside of the browser's proxy settings, and are subject to alteration by the OEM and/or the enterprise administrator. Worse, beyond the Tor proxy issues, the use of this OS certificate validation API means that the OEM and enterprise also have a simple entry point for installing their own root certificates to enable transparent HTTPS man-in-the-middle, with full browser validation and no user consent or awareness.

And all of this is before we even consider the need for defenses against third party tracking and fingerprinting, which prevent the linking of Tor activity to non-Tor usage, and which would also be useful for the wider non-Tor userbase.

While we'd love for this situation to change, and are open to working with Google to improve things, at present it means that our only option for Chrome is to maintain an even more invasive fork than our current Firefox patch set, with much less likelihood of a future merge than with Firefox. As a ballpark estimate, maintaining such a fork would require somewhere between 3 and 5 times the engineering staff and infrastructure we currently have at our disposal, in addition to the ramp-up time to port our current feature set over.

Unless either our funding situation or Google's attitude towards the features we require changes, Mozilla Firefox will remain the best platform for us to demonstrate that it is in fact possible to provide true privacy by design for the web for those who want it. It is very distressing that this means playing catch-up and forcing our users to make usability tradeoffs in exchange for improved browser security, but we will continue to do what we can to improve that situation, both with Mozilla and with our own independent efforts.

Mission Impossible: Hardening Android for Security and Privacy

Updates: See the Changes section for a list of changes since initial posting.

This post has since been superseded by the November 2016 refresh of the same idea.

Executive Summary

The future is here, and ahead of schedule. Come join us, the weather's nice.

This blog post describes the installation and configuration of a prototype of a secure, full-featured Android telecommunications device with full Tor support, individual application firewalling, true cell network baseband isolation, and optional ZRTP encrypted voice and video support. ZRTP runs over UDP, which is not yet possible to send over Tor, but we are able to send SIP account login and call setup over Tor independently.

The SIP client we recommend also supports dialing normal telephone numbers if you have a SIP gateway that provides trunking service.

Aside from a handful of binary blobs to manage the device firmware and graphics acceleration, the entire system can be assembled (and recompiled) using only FOSS components. As an added bonus, we will also describe how to handle the Google Play store, to mitigate the two infamous Google Play Backdoors.


Introduction

Android is the most popular mobile platform in the world, with a wide variety of applications, including many applications that aid in communications security, censorship circumvention, and activist organization. Moreover, the core of the Android platform is Open Source, auditable, and modifiable by anyone.

Unfortunately though, mobile devices in general and Android devices in particular have not been designed with privacy in mind. In fact, they've seemingly been designed with nearly the opposite goal: to make it easy for third parties, telecommunications companies, sophisticated state-sized adversaries, and even random hackers to extract all manner of personal information from the user. This includes the full content of personal communications with business partners and loved ones. Worse still, by default, the user is given very little in the way of control or even informed consent about what information is being collected and how.

This post aims to address this, but we must first admit we stand on the shoulders of giants. Organizations like Cyanogen, F-Droid, the Guardian Project, and many others have done a great deal of work to try to improve this situation by restoring control of Android devices to the user, and to ensure the integrity of our personal communications. However, all of these projects have shortcomings and often leave gaps in what they provide and protect. Even in cases where proper security and privacy features exist, they typically require extensive configuration to use safely, securely, and correctly.

This blog post enumerates and documents these gaps, describes workarounds for serious shortcomings, and provides suggestions for future work.

It is also meant to serve as a HOWTO to walk interested, technically capable people through the end-to-end installation and configuration of a prototype of a secure and private Android device, where access to the network is restricted to an approved list of applications, and all traffic is routed through the Tor network.

It is our hope that this work can be replicated and eventually fully automated, given a good UI, and rolled into a single ROM or ROM addon package for ease of use. Ultimately, there is no reason why this system could not become a full-fledged, off-the-shelf product, given proper hardware support and a good UI for the more technical bits.

The remainder of this document is divided into the following sections:

  1. Hardware Selection
  2. Installation and Setup
  3. Google Apps Setup
  4. Recommended Software
  5. Device Backup Procedure
  6. Removing the Built-in Microphone
  7. Removing Baseband Remnants
  8. Future Work
  9. Changes Since Initial Posting


Hardware Selection

If you truly wish to secure your mobile device from remote compromise, it is necessary to carefully select your hardware. First and foremost, it is absolutely essential that the carrier's baseband firmware be completely isolated from the rest of the platform. Because your cell phone's baseband does not authenticate the network (in part to allow roaming), any random hacker with their own cell network equipment can exploit baseband vulnerabilities and vendor backdoors, and use them to install malware on your device.

While there are projects underway to determine which handsets actually provide true hardware baseband isolation, at the time of this writing there is very little public information available on this topic. Hence, the only safe option remains a device with no cell network support at all (though cell network connectivity can still be provided by a separate device). For the purposes of this post, the reference device is the WiFi-only version of the 2013 Google Nexus 7 tablet.

For users who wish to retain full mobile access, we recommend obtaining a cell modem device that provides a WiFi access point for data services only. These devices do not have microphones, and in some cases do not even have fine-grained GPS units (because they are not able to make emergency calls). They are also available with prepaid plans, for rates around $20-30 USD per month, for about 2GB/month of 4G data. If coverage and reliability are important to you, though, you may want to go with a slightly more expensive carrier. In the US, T-Mobile isn't bad, but Verizon is superb.

To extend the battery life of your cell connection, you can connect this access point to an external mobile USB battery pack; a 6000mAh pack typically provides 36-48 hours of continuous use.

The total cost of a WiFi-only tablet with cell modem and battery pack is only roughly $50 USD more than the 4G LTE version of the same device.

In this way, you achieve true baseband isolation, with no risk of audio or network surveillance, baseband exploits, or provider backdoors. Effectively, this cell modem is just another untrusted router in a long, long chain of untrustworthy Internet infrastructure.

Note, though, that even if the cell unit does not contain a fine-grained GPS, you still sacrifice location privacy while using it. Over an extended period of time, it is possible to make inferences about your physical activity, behavior, personal preferences, and identity based on cell tower use alone.


Installation and Setup

We will focus on the installation of Cyanogenmod 11 using Team Win Recovery Project, both to give this HOWTO some shelf life, and because Cyanogenmod 11 features full SELinux support (Dear NSA: What happened to you guys? You used to be cool. Well, some of you. Some of the time. Maybe. Or maybe not).

The use of Google Apps and Google Play services is not recommended due to security issues with Google Play. However, we do provide workarounds for mitigating those issues, if Google Play is required for your use case.

Installation and Setup: ROM and Core App Installation

With the 2013 Google Nexus 7 tablet, installation is fairly straightforward. In fact, it is actually possible to install and use the device before associating it with a Google Account in any way. This is a desirable property, because by default, the otherwise mandatory initial setup process of the stock Google ROM sends your device MAC address directly to Google and links it to your Google account (all without using Tor, of course).

The official Cyanogenmod installation instructions are available online, but with a fresh out of the box device, here are the key steps for installation without activating the default ROM code at all (using Team Win Recovery Project instead of ClockWorkMod).

First, on your desktop/laptop computer (preferably Linux), perform the following:

  1. Download the latest CyanogenMod 11 release (we used cm-11-20140504-SNAPSHOT-M6)
  2. Download the latest Team Win Recovery Project image (we used 2.7.0.0)
  3. Download the F-Droid package (we used 0.66)
  4. Download the Orbot package from F-Droid (we used 13.0.7)
  5. Download the Droidwall package from F-Droid (we used 1.5.7)
  6. Download the Droidwall Firewall Scripts attached to this blogpost
  7. Download the Google Apps for Cyanogenmod 11 (optional)


Because the download integrity for all of these packages is abysmal, here is a signed set of SHA256 hashes I've observed for those packages.
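
If you want to check your downloads against that signed hash set, verification is along these lines (the filenames here are illustrative):

# Verify the signature on the hash list, then check each download against it.
gpg --verify sha256sums.txt.asc sha256sums.txt
sha256sum -c sha256sums.txt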

Once you have all of those packages, boot your tablet into fastboot mode by holding the Power button and the Volume Down button during a cold boot. Then, attach it to your desktop/laptop machine with a USB cable and run the following commands from a Linux/UNIX shell:

 apt-get install android-tools-adb android-tools-fastboot
 fastboot devices
 fastboot oem unlock
 fastboot flash recovery openrecovery-twrp-2.7.0.0-flo.img


After the recovery firmware is flashed successfully, use the volume keys to select Recovery and hit the power button to reboot the device (or power it off, and then boot holding Power and Volume Up).

Once Team Win boots, go into Wipe and select Advanced Wipe. Select all checkboxes except for USB-OTG, and slide to wipe. Once the wipe is done, click Format Data. After the format completes, issue these commands from your Linux shell:

 adb start-server
 adb push cm-11-20140504-SNAPSHOT-M6-flo.zip /sdcard/
 adb push gapps-kk-20140105-signed.zip /sdcard/ # Optional


After this push process completes, go to the Install menu, and select the Cyanogen zip, and optionally the gapps zip for installation. Then click Reboot, and select System.

After rebooting into your new installation, skip all CyanogenMod and Google setup, disable location reporting, and immediately disable WiFi and turn on Airplane mode.

Then, go into Settings -> About Tablet, scroll to the bottom, and tap the greyed-out Build number 5 times until developer mode is enabled. Then go into Settings -> Developer Options and turn on USB Debugging.

After that, run the following commands from your Linux shell:

 adb install FDroid.apk
 adb install org.torproject.android_86.apk
 adb install com.googlecode.droidwall_157.apk


You will need to approve the ADB connection for the first package, and then they should install normally.

VERY IMPORTANT: Whenever you finish using adb, always remember to disable USB Debugging and restore Root Access to Apps only. While Android 4.2+ ROMs now prompt you to authorize an RSA key fingerprint before allowing a debugging connection (thus mitigating adb exploit tools that bypass screen lock and can install root apps), leaving debugging enabled still exposes additional vulnerability surface.

Installation and Setup: Initial Configuration

After the base packages are installed, go into the Settings app, and make the following changes:

  1. Wireless & Networks More... =>
    • Temporarily Disable Airplane Mode
    • NFC -> Disable
    • Re-enable Airplane Mode
  2. Location Access -> Off
  3. Security =>
    • PIN screen Lock
    • Allow Unknown Sources (For F-Droid)
  4. Language & Input =>
    • Spell Checker -> Android Spell Checker -> Disable Contact Names
    • Disable Google Voice Typing/Hotword detection
    • Android Keyboard (AOSP) =>
      • Disable AOSP next-word suggestion (do this first!)
      • Auto-correction -> Off
  5. Backup & reset =>
    • Enable Back up my data (just temporarily, for the next step)
    • Uncheck Automatic restore
    • Disable Backup my data
  6. About Tablet -> Cyanogenmod Statistics -> Disable reporting
  7. Developer Options -> Device Hostname -> localhost
  8. SuperUser -> Settings (three dots) -> Notifications -> Notification (not toast)
  9. Privacy -> Privacy Guard =>
    • Enabled by default
    • Settings (three dots) -> Show Built In Apps
    • Enable Privacy Guard for every app with the following exceptions:
      • Calendar
      • Config Updater
      • Google Account Manager (long press)
        • Modify Settings -> Off
        • Wifi Change -> Off
        • Data Change -> Off
      • Google Play Services (long press)
        • Location -> Off
        • Modify Settings -> Off
        • Draw on top -> Off
        • Record Audio -> Off
        • Wifi Change -> Off
      • Google Play Store (long press)
        • Location -> Off
        • Send SMS -> Off
        • Modify Settings -> Off
        • Data change -> Off
      • Google Services Framework (long press)
        • Modify Settings -> Off
        • Wifi Change -> Off
        • Data Change -> Off
      • Trebuchet


Now, it is time to encrypt your tablet. It is important to do this step early, as I have noticed that additional apps and configuration tweaks can make this process fail later on.

We will also do this from the shell, in order to set a different password than your screen unlock PIN. This is done to mitigate the risk of shoulder surfers learning your disk encryption password, and to allow the use of a much longer (and non-numeric) password that you would not want to type every time you unlock the screen.

To do this, open the Terminal app, and type the following commands:

su
vdc cryptfs enablecrypto inplace NewMoreSecurePassword


Watch for typos! That command does not ask you to re-type that password for confirmation.

Installation and Setup: Disabling Invasive Apps and Services

Before you configure the Firewall or enable the network, you likely want to disable at least a subset of the following built-in apps and services, by using Settings -> Apps -> All, and then clicking on each app and hitting the Disable button:

  • com.android.smspush
  • com.google.android.voicesearch
  • Face Unlock
  • Google Backup Transport
  • Google Calendar Sync
  • Google One Time Init
  • Google Partner Setup
  • Google Contacts Sync
  • Google Search
  • Hangouts
  • Market Feedback Agent
  • News & Weather
  • One Time Init
  • Picasa Updater
  • Sound Search for Google Play
  • TalkBack


Installation and Setup: Tor and Firewall Configuration

Ok, now let's install the firewall and Tor support scripts. Go back into Settings -> Developer Options, enable USB Debugging, and change Root Access to Apps and ADB. Then, unzip the android-firewall.zip on your laptop, and run the installation script:

./install-firewall.sh


That firewall installation provides several key scripts whose functionality is currently impossible to achieve with any app (including Orbot):

  1. It installs a userinit script to block all network access during boot.
  2. It disables "Google Captive Portal Detection", which involves connection attempts to Google servers upon WiFi association (these requests are made by the Android Settings UID, which should normally be blocked from the network, unless you are first registering for Google Play).
  3. It contains a Droidwall script that configures Tor transproxy rules to send all of your traffic through Tor (a minimal sketch of the core idea follows this list). These rules include a fix for a Linux kernel Tor transproxy packet leak issue.
  4. The main firewall-torify-all.sh Droidwall script also includes an input firewall, to block all inbound connections to the device. It also fixes a Droidwall permissions vulnerability.
  5. It installs an optional script to allow the Browser app to bypass Tor for logging into WiFi captive portals.
  6. It installs an optional script to temporarily allow network adb access when you need it (if you are paranoid about USB exploits, which you should be).
  7. It provides an optional script to allow the UDP activity of LinPhone to bypass Tor, to allow ZRTP-encrypted Voice and Video SIP/VoIP calls. SIP account login/registration and call setup/signaling can be done over TCP, and Linphone's TCP activity is still sent through Tor with this script.
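
For reference, the core transproxy idea from item 3 boils down to iptables rules like these (a minimal sketch using Orbot's default TransPort and DNSPort of 9040 and 5400; the real firewall-torify-all.sh handles many more cases, including the input firewall from item 4):

# Redirect DNS and all new TCP connections into Tor's transparent
# proxy ports, and drop unsolicited inbound connections.
iptables -t nat -A OUTPUT -p udp --dport 53 -j REDIRECT --to-ports 5400
iptables -t nat -A OUTPUT -p tcp --syn -j REDIRECT --to-ports 9040
iptables -A INPUT -m state --state NEW -j DROP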


Note that with the exception of the userinit network blocking script, installing these scripts does not activate them. You still need to configure Droidwall to use them.

We use Droidwall instead of Orbot or AFWall+ for five reasons:

  1. Droidwall's app-based firewall and Orbot's transproxy are known to conflict and reset one another.
  2. Droidwall does not randomly drop transproxy rules when switching networks (Orbot has had several of these types of bugs).
  3. Unlike AFWall+, Droidwall is able to auto-launch at "boot" (though still not before the network and Android Services come online and make connections).
  4. AFWall+'s "fix" for this startup data leak problem does not work on Cyanogenmod (hence our userinit script instead).
  5. Aside from the permissions issue fixed by our firewall-torify-all.sh script, AFWall+ provides no additional security fixes over the stock Droidwall.


To make use of the firewall scripts, open up Droidwall and hit the config button (the vertical three dots), go to More -> Set Custom Script. Enter the following:

. /data/local/firewall-torify-all.sh
#. /data/local/firewall-allow-adb.sh
#. /data/local/firewall-allow-linphone-udp.sh
#. /data/local/firewall-allow-nontor-browser.sh


NOTE: You must not make any typos in the above. If you mistype any of those files, things may break. Because the userinit.sh script blocks all network at boot, if you make a typo in the torify script, you will be unable to use the Internet at all!

Also notice that these scripts have been installed into a read-only root directory. Because they are run as root, installing them to a world-writable location like /sdcard/ would be extremely unwise.

Later, if you want to enable one of network adb, LinPhone UDP, or captive portal login, go back into this window and remove the leading comment ('#') from the appropriate lines (this is obviously one of the many aspects of this prototype that could benefit from real UI).

Then, configure the apps you want to allow to access the network. Note that the only Android system apps that must access the network are:

  • CM Updater
  • Downloads, Media Storage, Download Manager
  • F-Droid


Orbot's network access is handled via the main firewall-torify-all.sh script. You do not need to enable full network access to Orbot in Droidwall.

The rest of the apps you can enable at your discretion. They will all be routed through Tor automatically.

Once Droidwall is configured, you can click on the Menu (three dots) and click the "Firewall Disabled" button to enable the firewall. Then, you can enable Orbot. Do not grant Orbot superuser access. It still opens the transproxy ports you need without root, and Droidwall is managing installation of the transproxy rules, not Orbot.

You are now ready to enable WiFi and network access on your device. For vulnerability surface reduction, you may want to use Advanced Options -> Static IP to manually enter an IP address for your device, to avoid using dhclient. You do not need a DNS server, and can safely set it to 127.0.0.1.


Google Apps Setup

If you installed the Google Apps zip, you need to do a few things now to set it up, and to further harden your device. If you opted out of Google Apps, you can skip to the next section.

Google Apps Setup: Initializing Google Play

The first time you use Google Play, you will need to enable four apps in Droidwall: "Google Account Manager, Google Play Services...", "Settings, Dev Tools, Fused Location...", "Gmail", and "Google Play" itself.

If you do not have a Google account, your best bet is to find open wifi to create one, as Google will often block accounts created through Tor, even if you use an Android device.

After you log in for the first time, you should be able to disable the "Google Account Manager, Google Play Services...", "Gmail", and the "Settings..." apps in Droidwall, but your authentication tokens in Google Play may expire periodically. If this happens, you should only need to temporarily enable the "Google Account Manager, Google Play Services..." app in Droidwall to obtain new ones.

Google Apps Setup: Mitigating the Google Play Backdoors

If you do choose to use Google Play, you need to be very careful about how you allow it to access the network. In addition to the risks associated with using a proprietary App Store that can send you targeted malware-infected packages based on your Google Account, it has at least two major user experience flaws:

  1. Anyone who is able to gain access to your Google account can silently install root or full permission apps without any user interaction whatsoever. Once installed, these apps can retroactively clear what little installation notification and UI-based evidence of their existence there was in the first place.
  2. The Android Update Process does not inform the user of changes in permissions of pending update apps that happen to get installed after an Android upgrade.


The first issue can be mitigated by ensuring that Google Play does not have access to the network when not in use, by disabling it in Droidwall. If you do not do this, apps can be installed silently behind your back. Welcome to the Google Experience.

For the second issue, you can install the SecCheck utility, to monitor your apps for changes in permissions during a device upgrade.

Google Apps Setup: Disabling Google Cloud Messaging

If you have installed the Google Apps zip, you have also enabled a feature called Google Cloud Messaging.

The Google Cloud Messaging Service allows apps to register for asynchronous remote push notifications from Google, as well as send outbound messages through Google.

Notification registration and outbound messages are sent via the app's own UID, so using Droidwall to disable network access by an app is enough to prevent outbound data, and notification registration. However, if you ever allow network access to an app, and it does successfully register for notifications, these notifications can be delivered even when the app is once again blocked from accessing the network by Droidwall.

These inbound notifications can be blocked by disabling network access to the "Google Account Manager, Google Play Services, Google Services Framework, Google Contacts Sync" in Droidwall. In fact, the only reason you should ever need to enable network access by this service is if you need to log in to Google Play again if your authentication tokens ever expire.

If you would like to test your ability to control Google Cloud Messaging, there are two apps in the Google Play store that can help with this. GCM Test allows for simple send and receive pings through GCM. Push Notification Tester will allow you to test registration and asynchronous GCM notification.


Recommended Privacy and Auditing Software

Ok, now that we have locked down our Android device, it's time for the fun bit: secure communications!

We recommend the following apps from F-Droid:

  1. Xabber

    Xabber is a full Java implementation of XMPP, and supports both OTR and Tor. Its UI is a bit more streamlined than Guardian Project's ChatSecure, and it does not make use of any native code components (which are more vulnerable to code execution exploits than pure Java code). Unfortunately, this means it lacks some of ChatSecure's nicer features, such as push-to-talk voice and file transfer.

    Despite better protection against code execution, it does have several insecure default settings. In particular, you want to make the following changes:

    • Notifications -> Message text in Notifications -> Off (notifications can be read by other apps!)
    • Accounts -> Integration into system accounts -> Off
    • Accounts -> Store message history -> Don't Store
    • Security -> Store History -> Off
    • Security -> Check Server Certificate
    • Chat -> Show Typing Notifications -> Off
    • Connection Settings -> Auto-away -> Disabled
    • Connection Settings -> Extended away when idle -> Disabled
    • Keep Wifi Awake -> On
    • Prevent sleep Mode -> On

  2. Offline Calendar

    Offline Calendar is a hack to allow you to create a fake local Google account that does not sync to Google. This allows you to use the Calendar app without risk of leaking your activities to Google. Note that you must exempt both this app and Calendar from Privacy Guard for it to function properly.

  3. LinPhone

    LinPhone is a FOSS SIP client that supports TCP TLS signaling and ZRTP. Note that neither TLS nor ZRTP is enabled by default. You must manually enable them in Settings -> Network -> Transport and Settings -> Network -> Media Encryption.

    ostel.co is a free SIP service run by the Guardian Project that supports only TLS and ZRTP, but does not allow outdialing to normal PSTN telephone numbers. While Bitcoin has many privacy issues of its own, the Bitcoin community maintains a couple of lists of "trunking" providers that allow you to obtain a PSTN phone number in exchange for Bitcoin payment.

  4. Plumble

    Plumble is a Mumble client that will route voice traffic over Tor, which is useful if you would like to communicate with someone over voice without revealing your IP to them, or your activity to a local network observer. However, unlike LinPhone, voice traffic is not end-to-end encrypted, so the Mumble server can listen to your conversations.

  5. K-9 Mail and APG

    K-9 Mail is a POP/IMAP client that supports TLS and integrates well with APG, which will allow you to send and receive GPG-encrypted mail easily. Before using it, you should be aware of two things: it identifies itself in your mail headers, which opens you up to targeted attacks specifically tailored for K-9 Mail and/or Android, and by default it includes the subject of messages in mail notifications (which is bad, because other apps can read notifications). There is a privacy option to disable subject text in notifications, but there is no option to disable the user agent in the mail headers.

  6. OSMAnd~

    A free offline mapping tool. While the UI is a little clunky, it does support voice navigation and driving directions, and is a handy, private alternative to Google Maps.

  7. VLC

    The VLC port in F-Droid is a fully capable media player. It can play mp3s and most video formats in use today. It is a handy, private alternative to Google Music and other closed-source players that often report your activity to third party advertisers. VLC does not need network access to function.

  8. Firefox

    We do not yet have a port of Tor Browser for Android (though one is underway -- see the Future Work section). Unless you want to use Google Play to get Chrome, Firefox is your best bet for a web browser that receives regular updates (the built-in Browser app does not). HTTPS-Everywhere and NoScript are available, at least.

  9. Bitcoin

    Bitcoin might not be the most private currency in the world. In fact, you might even say it's the least private currency in the world. But, it is a neat toy.

  10. Launch App Ops

    The Launch App Ops app is a simple shortcut into the hidden application permissions editor in Android. A similar interface is available through Settings -> Privacy -> Privacy Guard, but a direct shortcut to edit permissions is handy. It also displays some additional system apps that Privacy Guard omits.

  11. Permissions

    The Permissions app gives you a view of all Android permissions, and shows you which apps have requested a given permission. This is particularly useful to disable the record audio permission for apps that you don't want to suddenly decide to listen to you. (Interestingly, the Record Audio permission disable feature was broken in all Android ROMs I tested, aside from Cyanogenmod 11. You can test this yourself by revoking the permission from the Sound Recorder app, and verifying that it cannot record.)

  12. CatLog

    In addition to being supercute, CatLog is an excellent Android monitoring and debugging tool. It allows you to monitor and record the full set of Android log events, which can be helpful in diagnosing issues with apps.

  13. OS Monitor

    OS Monitor is an excellent Android process and connection monitoring app, that can help you watch for CPU usage and connection attempts by your apps.

  14. Intent Intercept

    Intent Intercept allows you to inspect and extract Android Intent content without allowing it to get forwarded to an actual app. This is useful for monitoring how apps attempt to communicate with each other, though be aware it only covers one of the mechanisms of inter-app communication in Android.


Backing up Your Device Without Google

Now that your device is fully configured and installed, you probably want to know how to back it up without sending all of your private information directly to Google. While the Team Win Recovery Project will back up all of your system settings and apps (even if your device is encrypted), it currently does not back up the contents of your virtualized /sdcard. Remembering to do a couple of adb pulls of key directories can save you a lot of heartache should you suffer some kind of data loss or hardware failure (or simply drop your tablet on a bridge while in a rush to catch a train).

The backup.sh script uses adb to pull your Download and Pictures directories from the /sdcard, as well as pulls the entire TWRP backup directory.
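
If you would like to see what that amounts to before running it, the heart of backup.sh is roughly equivalent to the following (a simplified sketch; the attached script is authoritative):

#!/bin/sh
# Pull the key user-data directories and the TWRP backups over adb.
adb pull /sdcard/Download ./Download
adb pull /sdcard/Pictures ./Pictures
adb pull /sdcard/TWRP ./TWRP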

Before you use that script, you probably want to delete old TWRP backup folders so as to only pull one backup, to reduce pull time. These live in /sdcard/TWRP/BACKUPS/, which is also known as /storage/emulated/0/TWRP/BACKUPS in the File Manager app.

To use this script over the network without a USB cable, enable both USB Debugging and ADB Over Network in your developer settings. The script does not require you to enable root access from adb, and you should not enable it: a backup takes quite a while to run, especially over network adb, and that is a long time to leave a root-capable debugging port exposed.

Prior to using network adb, you must edit your Droidwall custom scripts to allow it (by removing the '#' in the #. /data/local/firewall-allow-adb.sh line you entered earlier), and then run the following commands from a non-root Linux shell on your desktop/laptop (the ADB Over Network setting will tell you the IP and port):

killall adb
adb connect ip:5555

VERY IMPORTANT: Don't forget to disable USB Debugging, as well as the Droidwall adb exemption when you are done with the backup!


Removing the Built-in Microphone

If you would really like to ensure that your device cannot listen to you even if it is exploited, it turns out it is very straightforward to remove the built-in microphone in the Nexus 7. There is only one mic on the 2013 model, and it is located just below the volume buttons (the tiny hole).

To remove it, all you need to do is pop off the back panel (this can be done with your fingernails, or a tiny screwdriver), and then you can shave the microphone right off that circuit board, and reattach the panel. I have done this to one of my devices, and it was subsequently unable to record audio at all, without otherwise affecting functionality.

You can still use apps that require a microphone by plugging in a headphone headset that contains a built-in mic (these cost around $20, and you can get them from nearly any consumer electronics store). I have also tested this, and was still able to make a LinPhone call from a device with the built-in microphone removed, but with an external headset attached. Note that the 2012 Nexus 7 does not support these combination microphone+headphone jacks (and it has a secondary microphone as well). You must have the 2013 model.

The 2013 Nexus 7 Teardown video can give you an idea of what this looks like before you try it. Again you do not need to fully disassemble the device - you only need to remove the back cover.

Pro-Tip: Before you go too crazy and start ripping out the cameras too, remember that you can cover the cameras with a sticker or tape when not in use. I have found that regular old black electrical tape applies seamlessly, is non-obvious to casual onlookers, and is easy to remove without smudging or gunking up the lenses. Better still, it can be removed and reapplied many times without losing its adhesive.


Removing the Remnants of the Baseband

There is one more semi-hardware mod you may want to make, though.

It turns out that the 2013 WiFi Nexus 7 actually does have a partition containing cell network baseband firmware, located on the filesystem as the block device /dev/block/platform/msm_sdcc.1/by-name/radio. If you run strings on that block device from the shell, you can see that all manner of CDMA and GSM log messages, comments, and symbols are present in that partition.
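
For example, something like the following from a root Terminal shell should turn up GSM-related strings (assuming a busybox with the strings applet is available):

su
busybox strings /dev/block/platform/msm_sdcc.1/by-name/radio | grep -i gsm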

According to ADB logs, Cyanogenmod 11 actually does try to bring up a cell network radio at boot on my WiFi-only Nexus 7, but fails because the radio is disabled. There is also a strong economic incentive for Asus and Google to make the baseband extremely difficult to activate, even if the hardware is otherwise identical for manufacturing reasons: they sell the WiFi-only version for $100 less, and if re-enabling the baseband were easy, HOWTOs would exist (they do not seem to, at least not yet) and would cut into LTE device sales.

Even so, since we lack public schematics for the Nexus 7 to verify that cell components are actually missing or hardware-disabled, it may be wise to wipe this radio firmware as well, as defense in depth.

To do this, open the Terminal app, and run:

su
cd /dev/block/platform/msm_sdcc.1/by-name
dd if=/dev/zero of=./radio


I have wiped that partition while the device was running without any issues, and without any additional errors appearing in the ADB logs.

Note that an anonymous commenter also suggested it is possible to disable the baseband of a cell-enabled device using a series of Android service disable commands, and by wiping that radio block device. I have not tested this on a device other than the WiFi-only Nexus 7, though, so proceed with caution. If you try those steps on a cell-enabled device, you should archive a copy of your radio firmware first, by doing something like the following from the dev directory that contains the radio firmware block device:

dd if=./radio of=/sdcard/radio.img


If anything goes wrong, you can restore that image with:

dd if=/sdcard/radio.img of=./radio


Future Work

In addition to streamlining the contents of this post into a single additional Cyanogenmod installation zip or alternative ROM, the following problems remain unsolved.

Future Work: Better Usability

While arguably very secure, this system is obviously nowhere near usable. Here are some potential improvements to the user interface, based on a brainstorming session I had with another interested developer.

First of all, the AFWall+/Droidwall UI should be changed to be a tri-state: It should allow you to send app traffic over Tor, over your normal internet connection, or block it entirely.

Next, during app installation from either F-Droid or Google Play (this is an Intent another addon app can actually listen for), the user should be given the chance to decide if they would like that app's traffic to be routed over Tor, use the normal Internet connection, or be blocked entirely from accessing the network. Currently, the Droidwall default for new apps is "no network", which is a great default, but it would be nice to ask users what they would like to do during actual app installation.

Moreover, users should also be given a chance to edit the app's permissions upon installation as well, should they desire to do so.

The Google Play situation could also be vastly improved, should Google itself still prove unwilling to improve the situation. Google Play could be wrapped in a launcher app that automatically grants it network access prior to launch, and then disables it upon leaving the window.

A similar UI could be added to LinPhone. Because the actual voice and video transport for LinPhone does not use Tor, it is possible for an adversary to learn your SIP ID or phone number, and then call you just for the purposes of learning your IP. Because we handle call setup over Tor, we can prevent LinPhone from performing any UDP activity, or divulging your IP to the calling party prior to user approval of the call. Ideally, we would also want to inform the user of the fact that incoming calls can be used to obtain information about them, at least prior to accepting their first call from an unknown party.

Future Work: Find Hardware with Actual Isolated Basebands

Related to usability, it would be nice if we could have a serious community effort to audit the baseband isolation properties of existing cell phones, so we all don't have to carry around these ridiculous battery packs and sketch-ass wifi bridges. There is no engineering reason why this prototype could not be just as secure if it were a single piece of hardware. We just need to find the right hardware.

A random commenter claimed that the Galaxy Nexus might actually have exactly the type of baseband isolation we want, but the comment was from memory, and based on software reverse engineering efforts that were not publicly documented. We need to do better than this.

Future Work: Bug Bounty Program

If there is sufficient interest in this prototype, and/or if it gets transformed into a usable addon package or ROM, we may consider running a bug bounty program where we accept donations to a dedicated Bitcoin address, and award the contents of that wallet to anyone who discovers a Tor proxy bypass issue or remote code execution vulnerability in any of the network-enabled apps mentioned in this post (except for the Browser app, which does not receive security updates).

Future Work: Port Tor Browser to Android

The Guardian Project is undertaking a port of Tor Browser to Android as part of their OrFox project. This will greatly improve the privacy of your web browsing experience on the Android device over both Firefox and Chrome. We look forward to helping them in any way we can with this effort.

Future Work: WiFi MAC Address Randomization

It is actually possible to randomize the WiFi MAC address on the Google Nexus 7. The closed-source root app Mac Spoofer is able to modify the device MAC address using Qualcomm-specific methods in such a way that the entire Android OS becomes convinced that this is your actual MAC.

However, doing this requires installation of a root-enabled, closed-source application from the Google Play Store, which we believe is extremely unwise on a device you need to be able to trust. Moreover, this app cannot be autorun on boot, and your MAC address will also reset every time you disable the WiFi interface (which is easy to do accidentally). It also supports using only a single, manually entered MAC address.

Hardware-independent techniques (such as the Terminal command busybox ifconfig wlan0 hw ether <mac>) appear to interfere with the WiFi management system and prevent it from associating. Moreover, they do not cause the Android system to report the new MAC address either (visible under Settings -> About Tablet -> Status).
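
For the record, the hardware-independent sequence one would attempt is the following (the MAC shown is just an example locally administered address; as noted, this approach currently breaks association on this hardware):

su
busybox ifconfig wlan0 down
busybox ifconfig wlan0 hw ether 02:11:22:33:44:55
busybox ifconfig wlan0 up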

Obviously, an Open Source F-Droid app that properly resets (and automatically randomizes) the MAC every time the WiFi interface is brought up is badly needed.

Future Work: Disable Probes for Configured Wifi Networks

The Android OS currently probes for all of your configured WiFi networks while looking for open WiFi to connect to. Configured networks should not be probed for explicitly unless activity for their BSSID is seen. The xda-developers forum has a limited fix to change scanning behavior, but users report that it does not disable the active probing behavior for any "hidden" networks that you have configured.

Future Work: Recovery ROM Password Protection

An unlocked recovery ROM is a huge vulnerability surface for Android. While disk encryption protects your applications and data, it does not protect many key system binaries and boot programs. With physical access, it is possible to modify these binaries through your recovery ROM.

The ability to set a password for the Team Win recovery ROM, stored in such a way that a simple "fastboot flash recovery" would overwrite it, would go a long way toward improving device security. At least then, if your recovery ROM were ever replaced, the absence of the password would make that evident to you.

It may also be possible to restore your bootloader lock as an alternative, but then you lose the ability to make backups of your system using Team Win.

Future Work: Disk Encryption via TPM or Clever Hacks

Unfortunately, even disk encryption and a secure recovery firmware are not enough to fully defend against an adversary with an extended period of physical access to your device.

Cold Boot Attacks are still very much a reality against any form of disk encryption, and the best way to eliminate them is through hardware-assisted secure key storage, such as through a TPM chip on the device itself.

It may also be possible to mitigate these attacks by placing key material in SRAM memory locations that will be overwritten as part of the ARM boot process. If these physical memory locations are stable (and for ARM systems that use the SoC SRAM to boot, they will be), rebooting the device to extract key material will always end up overwriting it. Similar ARM CPU-based encryption defenses have also been explored in the research literature.

Future Work: Download and Build Process Integrity

Beyond the download integrity issues mentioned above, better build security is also deeply needed by all of these projects. A Gitian descriptor that is capable of building Cyanogenmod and arbitrary F-Droid packages in a reproducible fashion is one way to go about achieving this property.

Future Work: Removing Binary Blobs

If you read the Cyanogenmod build instructions closely, you can see that it requires extracting the binary blobs from some random phone, and shipping them out. This is the case with most ROMs. In fact, only the Replicant Project seems concerned with this practice, but regrettably they do not support any wifi-only devices. This is rather unfortunate, because no matter what they do with the Android OS on existing cell-enabled devices, they will always be stuck with a closed source, backdoored baseband that has direct access to the microphone, if not the RAM and the entire Android OS.

Kudos to them for finding one of the backdoors though, at least.


Changes Since Initial Posting

  1. Updated firewall scripts to fix Droidwall permissions vulnerability.
  2. Updated Applications List to recommend VLC as a free media player.
  3. Mention the Guardian Project's planned Tor Browser port (called OrFox) as Future Work.
  4. Mention disabling configured WiFi network auto-probing as Future Work
  5. Updated the firewall install script (and the android-firewall.zip that contains it) to disable "Captive Portal detection" connections to Google upon WiFi association. These connections are made by the Settings service user, which should normally be blocked unless you are Activating Google Play for the first time.
  6. Updated the Executive Summary section to make it clear that our SIP client can actually also make normal phone calls, too.
  7. Document removing the built-in microphone, for the truly paranoid folk out there.
  8. Document removing the remnants of the baseband, or disabling an existing baseband.
  9. Update SHA256SUM of FDroid.apk for 0.63
  10. Remove multiport usage from firewall-torify-all.sh script (and update android-firewall.zip).
  11. Add pro-tip to the microphone removal section: Don't remove your cameras. Black electrical tape works just fine, and can be removed and reapplied many times without smudges.
  12. Update android-firewall.zip installation and documentation to use /data/local instead of /etc. CM updates will wipe /etc, of course. Woops. If this happened to you while updating to CM-11-M5, download that new android-firewall.zip and run install-firewall.sh again as per the instructions above, and update your Droidwall custom script locations to use /data/local.
  13. Update the Future work section to describe some specific UI improvements.
  14. Update the Future work section to mention that we need to find hardware with actual isolated basebands. Duh. This should have been in there much earlier.
  15. Update the versions for everything
  16. Suggest enabling disk crypto directly from the shell, to avoid SSD leaks of the originally PIN-encrypted device key material.
  17. GMail network access seems to be required for App Store initialization now. Mention this in Google Apps section.
  18. Mention K-9 Mail, APG, and Plumble in the Recommended Apps section.
  19. Update the Firewall instructions to clarify that you need to ensure there are no typos in the scripts, and actually click the Droidwall UI button to enable the Droidwall firewall (otherwise networking will not work at all due to userinit.sh).
  20. Disable NFC in Settings config

Tor Browser Bundle 3.0beta1 Released

The first beta release in the 3.0 series of the Tor Browser Bundle is now available from the Tor Package Archive:
https://archive.torproject.org/tor-package-archive/torbrowser/3.0b1/

This release includes important security updates to Firefox, as well as a fix for a startup crash bug on Windows XP.

This release also reorganizes the bundle directory structure to simplify implementation of the Firefox updater in future releases. This means that extracting the bundle over a previous installation will likely not preserve your preferences or bookmarks, and may cause other issues.

This release has also introduced a build reproducibility issue on Windows, hence it is signed only by two keys. We should have this issue fixed by the next beta.

Here is the complete ChangeLog:

  • All Platforms:
    • Update Firefox to 17.0.10esr
    • Update NoScript to 2.6.8.2
    • Update HTTPS-Everywhere to 3.4.2
    • Bug #9114: Reorganize the bundle directory structure to ease future autoupdates
    • Bug #9173: Patch Tor Browser to auto-detect profile directory if launched without the wrapper script.
    • Bug #9012: Hide Tor Browser infobar for missing plugins.
    • Bug #8364: Change the default entry page for the addons tab to the installed addons page.
    • Bug #9867: Make flash objects really be click-to-play if flash is enabled.
    • Bug #8292: Make getFirstPartyURI log+handle errors internally to simplify caller usage of the API
    • Bug #3661: Remove polipo and privoxy from the banned ports list.
    • misc: Fix a potential memory leak in the Image Cache isolation
    • misc: Fix a potential crash if OS theme information is ever absent
    • Update Tor-Launcher to 0.2.3.1-beta
      • Bug #9114: Handle new directory structure
      • misc: Tor Launcher now supports Thunderbird
    • Update Torbutton to 1.6.4
      • Bug #9224: Support multiple Tor socks ports for about:tor status check
      • Bug #9587: Add TBB version number to about:tor
      • Bug #9144: Workaround to handle missing translation properties
  • Windows:
    • Bug #9084: Fix startup crash on Windows XP.
  • Linux:
    • Bug #9487: Create detached debuginfo files for Linux Tor and Tor Browser binaries.

Deterministic Builds Part Two: Technical Details

This is the second post in a two-part series on the build security improvements in the Tor Browser Bundle 3.0 release cycle.

The first post described why such security is necessary. This post is meant to describe the technical details with respect to how such builds are produced.

We achieve our build security through a reproducible build process that enables anyone to produce byte-for-byte identical binaries to the ones we release. Elsewhere on the Internet, this process is variously called "deterministic builds", "reproducible builds", "idempotent builds", and probably a few other terms, too.

To produce byte-for-byte identical packages, we use Gitian to build Tor Browser Bundle 3.0 and above, but that isn't the only option for achieving reproducible builds. We will first describe how we use Gitian, and then go on to enumerate the individual issues that Gitian solves for us, and those we had to solve ourselves through wrapper scripts, hacks, build process patches, and (in one esoteric case for Windows) direct binary patching.

Gitian: What is it?

Gitian is a thin wrapper around the Ubuntu virtualization tools written in a combination of Ruby and bash. It was originally developed by Bitcoin developers to ensure the build security and integrity of the Bitcoin software.

Gitian uses Ubuntu's python-vmbuilder to create a qcow2 base image for the Ubuntu version and architecture combination you specify in a 'descriptor', fetches the set of git and tarball inputs listed there, and then runs a shell script that you provide to build a component inside that controlled environment. This build process produces an output set that includes the compiled result and another "output descriptor" that captures the versions and hashes of all packages present on the machine during compilation.
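
As a concrete illustration, a minimal descriptor looks roughly like this (the field names are Gitian's; the package list, remote, and script here are hypothetical):

---
name: "tor-example"
suites: ["precise"]
architectures: ["i386"]
packages: ["git-core", "autoconf", "automake", "libtool", "libevent-dev", "libssl-dev", "zip"]
reference_datetime: "2013-06-01 00:00:00"
remotes:
  - "url": "https://git.torproject.org/tor.git"
    "dir": "tor"
files: []
script: |
  # Build inside the controlled VM and copy the result to Gitian's output dir.
  cd tor
  ./autogen.sh
  ./configure --disable-asciidoc
  make
  cp src/or/tor $OUTDIR/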

Gitian requires either Intel VT support (for qemu-kvm), or LXC support, and currently only supports launching Ubuntu build environments from Ubuntu itself.

Gitian: How Tor Uses It

Tor's use of Gitian is slightly more automated than, and also slightly different from, how Bitcoin uses it.

First of all, because Gitian supports only Ubuntu hosts and targets, we must cross compile everything for Windows and MacOS. Luckily, Mozilla provides support for MinGW-w64 as a "third tier" compiler, and does endeavor to work with the MinGW team to fix issues as they arise.

To our further good fortune, we were able to use a MacOS cross compiler created by Ray Donnelly based on a fork of "Toolchain4". We owe Ray a great deal for providing his compilers to the public, and he has been most excellent in terms of helping us through any issues we encountered with them. Ray is also working to merge his patches into the crosstools-ng project, to provide a more seamless build process to create rebuilds of his compilers. As of this writing, we are still using his binaries in combination with the flosoft MacOS10.X SDK.

For each platform, we build the components of Tor Browser Bundle in 3 stages, with one descriptor per stage. The first descriptor builds Tor and its core dependency libraries (OpenSSL, libevent, and zlib) and produces one output zip file. The second descriptor builds Firefox. The third descriptor combines the previous two outputs along with our included Firefox addons and localization files to produce the actual localized bundle files.

We provide a Makefile and shellscript-based wrapper around Gitian to automate the download and authentication of our source inputs prior to build, and to perform a final step that creates a sha256sums.txt file that lists all of the bundle hashes, and can be signed by any number of detached signatures, one for each builder.

It is important to distribute multiple cryptographic signatures to prevent targeted attacks aimed at stealing a single build signing key (because the signing key itself is of course another single point of failure). Unfortunately, GPG currently lacks support for verifying multiple signatures of the same document. Users must manually copy each detached signature they wish to verify into its proper .asc filename suffix. Eventually, we hope to create a stub installer or wrapper script of some kind to simplify this step, as well as add multi-signature support to the Firefox update process. We are also investigating adding a URL and a hash of the package list to the Tor Consensus, so that the Tor Consensus document itself authenticates our binary packages.
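
In the meantime, verifying two builders' signatures by hand looks something like this (the signature filenames are illustrative):

# Each builder produces a detached signature over the same sha256sums.txt;
# verify each one, then check your downloaded bundle against the list.
gpg --verify sha256sums.txt-builder1.asc sha256sums.txt
gpg --verify sha256sums.txt-builder2.asc sha256sums.txt
sha256sum -c sha256sums.txt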

We do not use the Gitian output descriptors or the Gitian signing tools, because those tools sign the output descriptors, and we found that Ubuntu's individual packages (which are listed in the output descriptors) varied too frequently for this mechanism to remain reproducible for very long. However, we do include a list of input versions and hashes used to produce each bundle in the bundle itself. The format of this versions file is the same one we use as input to download the sources. This means it should remain possible to re-build an arbitrary bundle for verification at a later date, assuming that any later updates to Ubuntu's toolchain packages do not change the output.

Gitian: Pain Points

Gitian is not perfect. In fact, many who have tried our build system have remarked that it is not even close to deterministic (and that for this and other reasons 'Reproducible Builds' is a better term). Worse, it seems to experience build failures for quite unpredictable reasons related to bugs in one or more of qemu-kvm/LXC, make, or qcow copy-on-write image support. These bugs are often intermittent, and simply restarting the build process often causes things to proceed smoothly. This has made them exceedingly tricky to pinpoint and diagnose.

Gitian's use of tags (especially signed tags) has some bugs and flaws. For this reason, we verify signatures ourselves after input fetching, and provide gitian only with explicit commit hashes for the input source repositories.

We maintain a list of the most common issues in the build instructions.

Remaining Build Reproducibility Issues

By default, the Gitian VM environment controls the following aspects of the build platform that normally vary and often leak into compiled software: hostname, build path, uname output, toolchain version, and time.

However, Gitian is not enough by itself to magically produce reproducible builds. Beyond what Gitian provides, we had to patch a number of reproducibility issues in Firefox and some of the supporting tools on each platform. These include:

  1. Reordering due to inode ordering differences (exposed via Python's os.walk())

    Several places in the Firefox build process use python scripts to repackage both compiled library archives and zip files. In particular, they tend to obtain directory listings using os.walk(), which is dependent upon the inode ordering of the filesystem. We fix this by sorting those file lists in the applicable places.

  2. LC_ALL localizations alter sorting order

    Sorting only gets you so far, though, if someone from a different locale is trying to reproduce your build. Differences in your character sets will cause these sort orders to differ. For this reason, we set the LC_ALL environment variable to 'C' at the top of our Gitian descriptors.

  3. Hostname and other OS info leaks in LXC mode

    For these cases, we simply patch the pieces of Firefox that include the hostname (primarily for about:buildconfig).

  4. Millisecond and below timestamps are not fixed by libfaketime

    Gitian relies on libfaketime to set the clock to a fixed value to deal with embedded timestamps in archives and in the build process. However, in some places, Firefox inserts millisecond timestamps into its supporting libraries as part of an informational structure. We simply zero these fields.

  5. FIPS-140 mode generates throwaway signing keys

    A rather insane subsection of the FIPS-140 certification standard requires that you distribute signatures for all of your cryptographic libraries. The Firefox build process meets this requirement by generating a temporary key, using it to sign the libraries, and discarding the private portion of that key. Because there are many other ways to intercept the crypto outside of modifying the actual DLL images, we opted to simply remove these signature files from distribution. There simply is no way to verify code integrity on a running system without both OS and coprocessor assistance. Download package signatures make sense of course, but we handle those another way (as mentioned above).

  6. On Windows builds, something mysterious causes 3 bytes to randomly vary in the binary.

    Unable to determine the source of this, we just bitstomp the binary and regenerate the PE header checksums using strip... Seems fine so far! ;)

  7. umask leaks into LXC mode in some cases

    We fix this by manually setting the umask at the top of our Gitian descriptors (see the environment sketch after this list). Additionally, we found that we had to reset the permissions inside of tar and zip files, as the umask was honored on some builds but not others.

  8. Zip and Tar reordering and attribute issues

    To address this and other sources of non-determinism, we created simple shell wrappers for zip and tar that produce stable entry ordering and attributes (a rough illustration follows this list).

  9. Timezone leaks

    To deal with these, we set TZ=UTC at the top of our descriptors (also covered in the environment sketch below).
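
To make the inode-ordering issue in item 1 concrete, here is a minimal sketch of the shape of that fix. The walk_sorted() and files_to_package() helpers are hypothetical names rather than the actual Firefox patches, but the core change is the same: sort every directory listing before using it.

    import os

    def walk_sorted(root):
        """Like os.walk(), but yields directories and files in sorted
        order, so output no longer depends on filesystem inode order."""
        for dirpath, dirnames, filenames in os.walk(root):
            # Sorting dirnames in place also fixes the order in which
            # os.walk() descends into subdirectories.
            dirnames.sort()
            filenames.sort()
            yield dirpath, dirnames, filenames

    # Hypothetical usage: collect files to repackage in a stable order.
    def files_to_package(root):
        result = []
        for dirpath, dirnames, filenames in walk_sorted(root):
            for name in filenames:
                result.append(os.path.join(dirpath, name))
        return result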
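
Items 2, 7, and 9 are all environment leaks, and we pin them at the top of our Gitian descriptors. The descriptors do this in shell; the sketch below shows the equivalent normalization in Python for anyone reproducing the effect in another build harness (the function name is ours, and time.tzset() is POSIX-only):

    import os
    import time

    def normalize_build_environment():
        # Pin the locale so sort orders and other collation-dependent
        # output do not vary with the builder's language settings.
        os.environ["LC_ALL"] = "C"

        # Pin the timezone so local-time timestamps that slip into the
        # build are at least the same for every builder.
        os.environ["TZ"] = "UTC"
        time.tzset()

        # Pin the umask so file permissions inside archives do not
        # depend on the builder's shell configuration.
        os.umask(0o022)

    normalize_build_environment()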
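
The zip and tar wrappers from item 8 are shell scripts in our build repository. As a rough Python illustration of what such a wrapper must guarantee (not our actual implementation), here is a deterministic zip writer: entries are added in sorted order, with a fixed timestamp and fixed permissions, so the same input tree yields byte-identical output (modulo the compressor version):

    import os
    import zipfile

    # Illustrative fixed metadata; the real wrappers make their own
    # choices for the reference date and permission bits.
    FIXED_DATE = (2013, 1, 1, 0, 0, 0)
    FIXED_MODE = 0o644

    def deterministic_zip(zip_path, root):
        paths = []
        for dirpath, dirnames, filenames in os.walk(root):
            dirnames.sort()
            for name in sorted(filenames):
                paths.append(os.path.join(dirpath, name))
        with zipfile.ZipFile(zip_path, "w") as zf:
            for path in paths:
                info = zipfile.ZipInfo(os.path.relpath(path, root),
                                       date_time=FIXED_DATE)
                info.external_attr = FIXED_MODE << 16  # Unix mode bits
                info.compress_type = zipfile.ZIP_DEFLATED
                with open(path, "rb") as f:
                    zf.writestr(info, f.read())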

Future Work

The most common question we've been asked about this build process is: What can be done to prevent the adversary from compromising the (substantially weaker) Ubuntu build and packaging processes, and further, what about the Trusting Trust attack?

In terms of eliminating the remaining single points of compromise, the first order of business is to build all of our compilers and toolchain directly from sources via their own Gitian descriptors.

Once this is accomplished, we can begin the process of building identical binaries from multiple different Linux distributions. This would require the adversary to compromise multiple Linux distributions in order to compromise the Tor software distribution.

If we can support reproducible builds through cross compiling from multiple architectures (Intel, ARM, MIPS, PowerPC, etc), this also reduces the likelihood of a Trusting Trust attack surviving unnoticed in the toolchain (because the machine code that injects the payload would have to be pre-compiled and present for all copies of the cross-compiled executable code in a way that is still not visible in the sources).

If those Linux distributions also support reproducible builds of the full build and toolchain environment (both Debian and Fedora have started this), we can eliminate Trusting Trust attacks entirely by using Diverse Double Compilation between multiple independent distribution toolchains, and/or assembly-audited compilers. In other words, we could use the distributions' deterministic build processes to verify that identical build environments are produced through Diverse Double Compilation.
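
To make the Diverse Double Compilation step concrete, the following sketch shows the shape of the check (the compile_with() helper and all paths are hypothetical): build the compiler's source with two independent trusted toolchains, then let each stage-1 result build the same source again. With deterministic builds and honest toolchains, the stage-2 outputs must match bit-for-bit; a mismatch means at least one toolchain altered what it compiled.

    import hashlib
    import subprocess

    def sha256(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def compile_with(compiler, source_dir, output):
        # Hypothetical helper: use `compiler` to build the compiler's
        # own source tree into a new compiler binary at `output`.
        subprocess.check_call([compiler, source_dir, "-o", output])

    # Stage 1: two independently produced toolchains build the source.
    compile_with("compiler-A", "compiler-src", "stage1-via-A")
    compile_with("compiler-B", "compiler-src", "stage1-via-B")

    # Stage 2: each stage-1 compiler builds the same source again.
    compile_with("./stage1-via-A", "compiler-src", "stage2-via-A")
    compile_with("./stage1-via-B", "compiler-src", "stage2-via-B")

    # Deterministic + honest toolchains => identical stage-2 binaries.
    assert sha256("stage2-via-A") == sha256("stage2-via-B")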

As can be seen, much work remains before the system is fully resistant against all forms of malware injection, but even in their current state, reproducible builds are a huge step forward in software build security. We hope this information helps other software distributors to follow the example set by Bitcoin and Tor.

Tor Browser Bundle 3.0alpha4 Released

The fourth alpha release in the 3.0 series of the Tor Browser Bundle is now available from the Tor Package Archive:
https://archive.torproject.org/tor-package-archive/torbrowser/3.0a4/

This release includes important security updates to Firefox. Here is the complete ChangeLog:

  • All Platforms:
    • Bug #8751: Randomize TLS HELLO timestamp in HTTPS connections
    • Bug #9790 (workaround): Temporarily re-enable JS-Ctypes for cache
      isolation and SSL Observatory

    • Update Firefox to 17.0.9esr
    • Update Tor to 0.2.4.17-rc
    • Update NoScript to 2.6.7.1
    • Update Tor-Launcher to 0.2.2-alpha
      • Bug #9675: Provide feedback mechanism for clock-skew and other early
        startup issues

      • Bug #9445: Allow user to enter bridges with or without 'bridge' keyword
      • Bug #9593: Use UTF16 for Tor process launch to handle unicode paths.
      • misc: Detect when Tor exits and display appropriate notification
    • Update Torbutton to 1.6.2.1
      • Bug #9492: Fix Torbutton logo on OSX and Windows (and related
        initialization code)
        initialization code)

      • Bug #8839: Disable Google/Startpage search filters using Tor-specific URLs

As usual, these binaries should be exactly reproducible by anyone with Ubuntu and KVM support. To build your own identical copies of these bundles from source code, check out the official repository and use git tag tbb-3.0alpha4-build1 (commit d1fad5a54345d9dad8f8997f2f956d3f4fdeb0f4).

These instructions should explain things from there. If you notice any differences from the official bundles, I would love to hear about it!
