This blog post is meant to generate a conversation about best practices for using cryptography and privacy by design to improve security and protect user data from well-resourced attackers and oppressive regimes.
The technology industry faces tremendous risks and challenges that it must defend itself against in the coming years. State-sponsored hacking and pressure for backdoors will both increase dramatically, even as soon as early 2017. Faltering diplomacy and faltering trade between the United States and other countries will also endanger the remaining deterrent against large-scale state-sponsored attacks.
Unfortunately, it is also likely that in the United States, current legal mechanisms, such as NSLs and secret FISA warrants, will continue to target the marginalized. This will include immigrants, Muslims, minorities, and even journalists who dare to report unfavorably about the status quo. History is full of examples of surveillance infrastructure being abused for political reasons.
Trust is the currency of the technology industry, and if it evaporates, so will the value of the industry itself. It is wise to get out ahead of this erosion of trust, which has already caused Americans to change online buying habits.
This trust comes from demonstrating the ability to properly handle user data in the face of extraordinary risk. The Tor Project has over a decade of experience managing risk from state and state-sized adversaries in many countries. We want to share this experience with the wider technology community, in the hopes that we can all build a better, safer world together. We believe that the future depends on transparency and openness about the strengths and weaknesses of the technology we build.
To that end, we decided to enumerate some general principles that we follow to design systems that are resistant to coercion, compromise, and single points of failure of all kinds, especially adversarial failure. We hope that these principles can be used to start a wider conversation about current best practices for data management and potential areas for improvement at major tech companies.
Ten Principles for User Protection
1. Do not rely on the law to protect systems or users.
2. Prepare policy commentary for quick response to crisis.
3. Only keep the user data that you currently need.
4. Give users full control over their data.
5. Allow pseudonymity and anonymity.
6. Encrypt data in transit and at rest.
7. Invest in cryptographic R&D to replace non-cryptographic systems.
8. Eliminate single points of security failure, even against coercion.
9. Favor open source and enable user freedom.
10. Practice transparency: share best practices, stand for ethics, and report abuse.
1. Do not rely on the law to protect systems or users.
This is the principle from which the others flow. Whether it is foreign hackers, extra-legal entities like organized crime, or the abuse of power in one of the jurisdictions in which you operate, there are plenty of threats outside and beyond the reach of law that can cause harm to your users. It is wise not to assume that the legal structure will keep your users and their data safe from these threats. Only sound engineering and data management practices can do that.
2. Prepare policy commentary for quick response to crisis.
It is common for technologists to take Principle 1 so far that they ignore the law, or at least ignore the political climate in which they operate. It is possible for the law and even for public opinion to turn against technology quickly, especially during a crisis where people do not have time to fully understand the effects of a particular policy on technology.
The technology industry should be prepared to counter bad policy recommendations with coherent arguments as soon as the crisis hits. This means spending time and devoting resources to testing the public's reaction to statements and arguments about policy in focus groups, with lobbyists, and in other demographic testing scenarios, so that we know what arguments will appeal to which audiences ahead of time. It also means having media outlets, talk show hosts, and other influential people ready to back up our position. It is critical to prepare early. When a situation becomes urgent, bad policy often gets implemented quickly, simply because "something must be done".
3. Only keep the user data that you currently need.
Excessive personally identifiable data retention is dangerous to users, especially the marginalized and the oppressed. Data that is retained is data that is at risk of compromise or future misuse. As Maciej Ceglowski suggests in his talk Haunted By Data, "First: Don't collect it. But if you have to collect it, don't store it! If you have to store it, don't keep it!"
With enough thought and the right tools, it is possible to engineer your way out of your ability to provide data about specific users, while still retaining the information that is valuable or essential to conduct your business. Examples of applications of this idea are Differential Privacy, PrivEx, the EFF's CryptoLog, and how Tor collects its user metrics. We will discuss this idea further in Principle 7; the research community is exploring many additional methods that could be supported and deployed.
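To make the idea concrete, here is a simplified sketch of the keyed-hash approach behind tools like the EFF's CryptoLog: client IPs in a log are replaced with a truncated keyed hash under a short-lived key, so logs remain aggregatable (the same IP maps to the same token while the key lives) but become unlinkable to users once the key is destroyed. The key handling and field layout here are our own illustration, not CryptoLog's actual implementation.

```shell
# Illustrative log sanitization: replace the client IP (first field) with a
# truncated keyed hash. Destroying DAILY_KEY at the end of the day makes the
# tokens unlinkable to the original IPs.
DAILY_KEY="$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')"

anonymize_ips() {
  while read -r ip rest; do
    # Keyed hash of the IP, truncated to 16 hex chars for readability
    token=$(printf '%s:%s' "$DAILY_KEY" "$ip" | sha256sum | cut -c1-16)
    printf '%s %s\n' "$token" "$rest"
  done
}

printf '203.0.113.7 GET /index.html\n203.0.113.7 GET /about.html\n' | anonymize_ips
```

Within one key period the two requests above receive the same token, so per-client aggregate statistics still work; across key periods, and after key destruction, they do not.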
4. Give users full control over their data.
For sensitive data that must be retained in a way that can be associated with an individual user, the ethical thing to do is to give users full control over that data. Users should have the ability to remove data that is collected about themselves, and this process should be easy. Users should be given interfaces that make it clear what type of data is collected about them and how, and they should be given easy ways to migrate, restrict, or remove this data if they wish.
5. Allow pseudonymity and anonymity.
In addition to supporting pseudonymous accounts, the ability to anonymously access information via Tor and VPNs must also be protected and preserved. There is a disturbing trend for automated abuse detection systems to harshly penalize shared IP address infrastructure of all kinds, leading to loss of access.
The Tor Project is working with Cloudflare on both cryptographic and engineering-based solutions to enable Tor users to more easily access websites. We invite interested representatives from other tech companies to help us refine and standardize these solutions, and ensure that these solutions will work for them, too.
6. Encrypt data in transit and at rest.
With recent policy changes in both the US and abroad, it is more important than ever to encrypt data in transit, so that it does not end up in the dragnet. This means more than just HTTPS. Even intra-datacenter communications should be protected by IPSec or VPN encryption.
As more of our data is encrypted in transit, requests for stored data will likely rise.
Companies can still be compelled to decrypt data that is encrypted with keys that they control. The only way to keep user data truly safe is to provide ways for users to encrypt that data with keys that only those users control.
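A minimal sketch of this idea: data is encrypted on the client, under a key derived from a passphrase only the user knows, before it ever reaches the service. The service stores only ciphertext and so has nothing useful to hand over. The file names and passphrase variable below are illustrative, and the example assumes OpenSSL 1.1.1+ for the -pbkdf2 option.

```shell
# User-held secret; never sent to the service.
PASSPHRASE="correct horse battery staple"

printf 'my private notes\n' > notes.txt

# Client-side encryption before upload: the service only ever sees notes.txt.enc.
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in notes.txt -out notes.txt.enc -pass pass:"$PASSPHRASE"

# Only the user, holding the passphrase, can reverse it.
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in notes.txt.enc -out notes.decrypted -pass pass:"$PASSPHRASE"
```

A real product would layer key backup, rotation, and authenticated encryption on top of this, but the core property is the same: the party storing the data cannot decrypt it.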
7. Invest in cryptographic R&D to replace non-cryptographic systems.
A common argument against cryptographic solutions for privacy is that the loss of features, usability, ad targeting, or analytics is in opposition to the business case for the product in question. We believe that this is because the funding for cryptography has not been focused on these needs. In the United States, much of the current cryptographic R&D funding comes from the US military. As Phillip Rogaway pointed out in Part 4 of his landmark paper, The Moral Character of Cryptographic Work, this has created a misalignment between what gets funded versus what is needed in the private sector to keep users' personal data safe in a usable way.
It would be a wise investment for companies that handle large amounts of user data to fund research into potential replacement systems that are cryptographically privacy preserving. It may be the case that a company can be both skillful and lucky enough to retain detailed records and avoid a data catastrophe for several years, but we do not believe it is possible to keep a perfect record forever.
The following are some areas that we think should be explored more thoroughly, in some cases with further research, and in other cases with engineering resources for actual implementations: searchable encryption, anonymous credentials, private ad delivery, private location queries, private location sharing, and private information retrieval (PIR) in general.
8. Eliminate single points of security failure, even against coercion.
Well-designed cryptographic systems are extremely hard to compromise. Typically, the adversary looks for a way around the cryptography, either by exploiting other code on the system or by coercing one of the parties into divulging key material or decrypted data. These attacks naturally target the weakest point of the system: the single point of security failure where the fewest systems must be compromised and where the fewest people will notice. The proper engineering response is to ensure that multiple layers of security must be broken before security fails, and that any security failure is visible and apparent to the largest possible number of people.
Sandboxing, modularization, vulnerability surface reduction, and least privilege are already established as best practices for improving software security. They also eliminate single points of failure. In combination, they force the adversary to compromise multiple hardened components before the system fails. Compiler hardening is another way to eliminate single points of failure in code bases. Even with memory-unsafe languages, it is still possible for the compiler to add additional security layers. We believe that compiler hardening could use more attention from companies that contribute to projects like GCC and clang/llvm, so that the entire industry can benefit. In today's world, we all rely on the security of each other's software, sometimes indirectly, in order to do our work.
When security does fail, we want incidents to be publicly visible. Distributed systems and multi-party/multi-key authentication mechanisms are common ways to ensure this visibility. The Tor consensus protocol is a good example of a system that was deliberately designed such that multiple people must be simultaneously compromised or coerced before security will fail. Reproducible builds are another example of this design pattern. While these types of practices are useful when used internally in an organization, this type of design is more effective when it crosses organizational boundaries - so that multiple organizations need to be compromised to break the security of a system - and most effective when it also crosses cultural boundaries and legal jurisdictions.
We are particularly troubled by the trend towards the use of App Stores to distribute security software and security updates. When each user is personally identifiable to the software update system, that system becomes a perfect vector for backdoors. Globally visible audit logs like Google's General Transparency are one possible solution to this problem. Additionally, the anonymous credentials mentioned in Principle 7 provide a way to authenticate the ability to download an app without revealing the identity of the user, which would make it harder to target specific users with malicious updates.
9. Favor open source and enable user freedom.
The Four Software Freedoms are the ability to use, study, share, and improve software.
Open source software that provides these freedoms has many advantages when operating in a hostile environment. It is easier for experts to certify and verify security properties of the software; subtle backdoors are easier to find; and users are free to modify the software to remove any undesired operation.
The most widely accepted argument against backdoors is that they are technically impossible to deploy safely: once discovered, a backdoor compromises the security of the entire system. A secondary argument is that backdoors can be avoided through the use of alternative systems, or removed outright. Both of these arguments are stronger for open source than for closed source, precisely because of the Four Freedoms.
10. Practice transparency: share best practices, stand for ethics, and report abuse.
Unfortunately, not all software is open source. Even for proprietary software, the mechanisms by which we design our systems to prevent harm and abuse should be shared publicly in as much detail as possible, so that best practices can be reviewed and adopted more widely. For example, Apple is doing great work adopting cryptography for many of its products, but without specifications for how they are using techniques like differential privacy or iMessage encryption, it is hard to know what protections they are actually providing, if any.
Still, even when the details of their work are not public, the best engineers deeply believe that protecting their users is an ethical obligation, to the point of being prepared to publicly resign from their jobs rather than cause harm.
But, before we get to the point of resignation, it is important that we do our best to design systems that make abuse either impossible or evident. We should then share those designs, and responsibly report any instances of abuse. When abuse happens, inform affected organizations, and protect the information of individual users who were at risk, but make sure that users and the general public will hear about the issue with little delay.
Please Join Us
Ideally, this post will spark a conversation about best practices for data management and the deployment of cryptography in companies around the world.
We hope to use this conversation to generate a list of specific best practices that the industry is already undertaking, as well as to provide a set of specific recommendations based on these principles for companies with which we're most familiar, and whose products will have the greatest impact on users.
If you have specific suggestions, or would like to highlight the work of companies who are already implementing these principles, please mention them in the comments. If your company is already taking actions that are consistent with these principles, either write about that publicly, or contact me directly. We're interested in highlighting positive examples of specific best practices as well as instances where we can all improve, so that we all can work towards user safety and autonomy.
We would like to thank everyone at the Tor Project and the many members of the surrounding privacy and Internet freedom communities who provided review, editorial guidance, and suggestions for this post.
After a long wait, the Tor Project is happy to announce a refresh of our Tor-enabled Android phone prototype.
This prototype is meant to show a possible direction for Tor on mobile. While I use it myself for my personal communications, it has some rough edges, and installation and update will require familiarity with Linux.
The prototype is also meant to show that it is still possible to replace and modify your mobile phone's operating system while retaining verified boot security - though only just barely. The Android ecosystem is moving very fast, and in this rapid development, we are concerned that the freedom of users to use, study, share, and improve the operating system software on their phones is being threatened. If we lose these freedoms on mobile, we may never get them back. This is especially troubling as mobile access to the Internet becomes the primary form of Internet usage worldwide.
We are trying to demonstrate that it is possible to build a phone that respects user choice and freedom, vastly reduces vulnerability surface, and sets a direction for the ecosystem with respect to how to meet the needs of high-security users. Obviously this is a large task. Just as with our earlier prototype, we are relying on suggestions and support from the wider community.
Help from the Community
When we released our first prototype, the Android community exceeded our wildest expectations with respect to their excitement and contributions. The comments on our initial blog post were filled with helpful suggestions.
Soon after that post went up, Cédric Jeanneret took my Droidwall scripts and adapted them into the very nice OrWall, which is exactly how we think a Tor-enabled phone should work in general. Users should have full control over what information applications can access on their phones, including Internet access, and have control over how that Internet access happens. OrWall provides the networking component of this access control. It allows the user to choose which apps route through Tor, which route through non-Tor, and which can't access the Internet at all. It also has an option to let a specific Voice over IP app, like Signal, bypass Tor for the UDP voice data channel, while still sending call setup information over Tor.
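The rules a firewall like OrWall sets up can be sketched roughly as follows. This is our own illustration of the general technique, not OrWall's actual rule set: the UID is a placeholder (Android assigns each app its own UID), and 9040/5400 are Orbot's conventional TransPort/DNSPort values. The sketch only emits the rules; applying them requires iptables and root on the device.

```shell
APP_UID=10123     # placeholder app UID; Android assigns one per app
TRANS_PORT=9040   # assumed Orbot TransPort (transparent TCP proxy)
DNS_PORT=5400     # assumed Orbot DNSPort

emit_torify_rules() {
  # Redirect this app's DNS queries into Tor's DNSPort.
  echo "iptables -t nat -A OUTPUT -m owner --uid-owner $APP_UID -p udp --dport 53 -j REDIRECT --to-ports $DNS_PORT"
  # Redirect this app's new TCP connections into Tor's TransPort.
  echo "iptables -t nat -A OUTPUT -m owner --uid-owner $APP_UID -p tcp --syn -j REDIRECT --to-ports $TRANS_PORT"
  # Reject everything else from this UID so nothing leaks around Tor.
  echo "iptables -A OUTPUT -m owner --uid-owner $APP_UID ! -o lo -j REJECT"
}

emit_torify_rules   # pipe into 'sh' as root on the device to apply
```

The key design point is the final reject rule: even if Tor or the proxy redirection fails, the app's traffic is dropped rather than sent in the clear, which is exactly the fail-closed property the VPN APIs lack.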
At around the time that our blog post went up, the Copperhead project began producing hardened builds of Android. The hardening features make it more difficult to exploit Android vulnerabilities, and also provide WiFi MAC address randomization, so that it is no longer trivial to track devices using this information.
Copperhead is also the only Android ROM that supports verified boot, which prevents exploits from modifying the boot, system, recovery, and vendor device partitions. Copperhead has also extended this protection by preventing system applications from being overridden by Google Play Store apps, or from writing bytecode to writable partitions (where it could be modified and infected). This makes Copperhead an excellent choice for our base system.
The Copperhead Tor Phone Prototype
Upon the foundation of Copperhead, Orbot, Orwall, F-Droid, and other community contributions, we have built an installation process that installs a new Copperhead phone with Orbot, OrWall, SuperUser, Google Play, and MyAppList with a list of recommended apps from F-Droid.
We require SuperUser and OrWall instead of using the VPN APIs because the Android VPN APIs are still not as reliable as a firewall in terms of preventing leaks. Without a firewall-based solution, the VPN can leak at boot, or if Orbot is killed or crashes. Additionally, DNS leaks outside of Tor still occur with the VPN APIs on some systems.
We provide Google Play primarily because Signal still requires it, but also because some users probably want other apps from the Play Store. You do not need a Google account to use Signal, but without one you must download the Signal Android package and sideload it manually (via adb install).
The need to install these components to the system partition means that we must re-sign the Copperhead image and updates if we want to keep the ability to have system integrity from Verified Boot.
Thankfully, the Nexus devices supported by Copperhead allow the use of user-generated keys. The installation process simply takes a Copperhead image, installs our additional apps, and signs it with the new keys.
Systemic Threats to Software Freedom
Unfortunately, Copperhead is the only Android rebuild that supports Verified Boot, and the Google Nexus/Pixel hardware is the only Android hardware that allows the user to install their own keys, retaining both the ability to modify the device and the filesystem security provided by verified boot.
This, combined with Google's increasing hostility towards Android as a fully Open Source platform, as well as the difficulty for external entities to keep up with Android's surprise release and opaque development processes, means that the freedoms of end users to use, study, share, and improve the Android system are all in great jeopardy.
This all means that the Android platform is effectively moving toward the "Look but don't touch" Shared Source model that Microsoft tried in the early 2000s. However, instead of being explicit about this, Google appears to be doing it surreptitiously. It is a deeply disturbing trend.
It is unfortunate that Google seems to see locking down Android as the only solution to the fragmentation and resulting insecurity of the Android platform. We believe that more transparent development and release processes, along with deals for longer device firmware support from SoC vendors, would go a long way to ensuring that it is easier for good OEM players to stay up to date. Simply moving more components to Google Play, even though it will keep those components up to date, does not solve the systemic problem that there are still no OEM incentives to update the base system. Users of old AOSP base systems will always be vulnerable to library, daemon, and operating system issues. Simply giving them slightly more up to date apps is a bandaid that both reduces freedom and does not solve the root security problems. Moreover, as more components and apps are moved to closed source versions, Google is reducing its ability to resist the demand that backdoors be introduced. It is much harder to backdoor an open source component (especially with reproducible builds and binary transparency) than a closed source one.
If Google Play is to be used as a source of leverage to solve this problem, a far better approach would be to use it as a pressure point to mandate that OEMs keep their base system updated. If they fail to do so, their users will begin to lose Google Play functionality, with proper warning that notifies them that their vendor is not honoring their support agreement. In a more extreme version, the Android SDK itself could have compiled code that degrades app functionality or disables apps entirely when the base system becomes outdated.
Another option would be to change the license of AOSP itself to require that any parties that distribute binaries of the base system must provide updates to all devices for some minimum period of time. That would create a legal avenue for class-action lawsuits or other legal action against OEMs that make "fire and forget" devices that leave their users vulnerable, and endanger the Internet itself.
While extreme, both of these options would be preferable to completely giving up on free and open computing for the future of the Internet. Google should be competing on overall Google account integration experience, security, app selection, and media store features. They should use their competitive position to encourage/enforce good OEM behavior, not to create barriers and bandaids that end up enabling yet more fragmentation due to out of date (and insecure) devices.
It is for this reason that we believe that projects like Copperhead are incredibly important to support. Once we lose these freedoms on mobile, we may never get them back. It is especially troubling to imagine a future where mobile access to the Internet is the primary form of Internet usage, and for that usage, all users are forced to choose between having either security or freedom.
The hardware for this prototype is the Google Nexus 6P. While we would prefer to support lower end models for low income demographics, only the Nexus and Pixel lines support Verified Boot with user-controlled keys. We are not aware of any other models that allow this, but we would love to hear if there are any that do.
In theory, installation should work for any of the devices supported by Copperhead, but updating the device will require the addition of an updater-script and an adaptation of the releasetools.py for that device, to convert the radio and bootloader images to the OTA update format.
If you are not allergic to buying hardware online, we highly recommend that you order them from the Copperhead store. The devices are shipped with tamper-evident security tape, for what it's worth. Otherwise, if you're lucky, you might still be able to find a 6P at your local electronics retail store. Please consider donating to Copperhead anyway. The project is doing everything right, and could use your support.
Hopefully, we can add support for the newer Pixel devices as soon as AOSP (and Copperhead) supports them, too.
Before you dive in, remember that this is a prototype, and you will need to be familiar with Linux.
The run_all.sh script should walk you through a series of steps, printing out instructions for unlocking the phone and flashing the system. Please read the instructions in the repository for full installation details.
The very first device boot after installation will take a while, so be patient. During this boot, you should note the fingerprint of your key on the yellow boot splash screen. That fingerprint is what authenticates the use of your key and the rest of the boot process.
Once the system is booted, after you have given Google Play Services the Location and Storage permissions (as per the instructions printed by the script), make sure you set the Date and Time accurately, or Orbot will not be able to connect to the Tor Network.
Then, you can start Orbot, and allow F-Droid, Download Manager, the Copperhead updater, Google Play Services (if you want to use Signal), and any other apps you want to access the network.
NOTE: To keep Orbot up to date, you will have to go into the F-Droid Repositories settings and enable the Guardian Project Official Releases repository.
Installation: F-Droid apps
Once you have networking and F-Droid working, you can use MyAppList to install apps from F-Droid. Our installation provides a list of useful apps for MyAppList. The MyAppList app will allow you to select the subset you want, and install those apps in succession by invoking F-Droid. Start this process by clicking on the upward arrow at the bottom right of the screen:
Alternately, you can add links to additional F-Droid packages in the apk url list prior to running the installation, and they will be downloaded and installed during run_all.sh.
NOTE: Do not update OrWall past 1.1.0 via F-Droid until issue 121 is fixed, or networking will break.
Signal is one of the most useful communications applications to have on your phone. Unfortunately, despite being open source itself, Signal is not included in F-Droid, for historical reasons. As near as we can tell, most of the issues behind that dispute have since been resolved. Now that Signal is reproducible, we see no reason why it can't be included in some F-Droid repo, if not the F-Droid repo, so long as it is the same Signal with the same key. It is unfortunate to see so much disagreement over this point, though. Even if Signal won't meet the criteria for the official F-Droid repo (or wherever that tirefire of a flamewar is at right now), we wish that at the very least it could meet the criteria for an alternate "Non-Free" repo, much like the Debian project provides. Nothing is preventing the redistribution of the official Signal apk.
For now, if you do not wish to use a Google account with Google Play, it is possible to download the Signal apks from one of the apk mirror sites (such as APK4fun, apkdot.com, or apkplz.com). To ensure that you have the official Signal apk, perform the following:
- Download the apk.
- Unzip the apk with unzip org.thoughtcrime.securesms.apk
- Verify that the signing key is the official key with keytool -printcert -file META-INF/CERT.RSA
- You should see a line with SHA256: 29:F3:4E:5F:27:F2:11:B4:24:BC:5B:F9:D6:71:62:C0 EA:FB:A2:DA:35:AF:35:C1:64:16:FC:44:62:76:BA:26
- Make sure that fingerprint matches (the space was added for formatting).
- Verify that the contents of that APK are properly signed by that cert with: jarsigner -verify org.thoughtcrime.securesms.apk. You should see jar verified printed out.
Then, you can install the Signal APK via adb with adb install org.thoughtcrime.securesms.apk. You can verify you're up to date with the version in the app store with ApkTrack.
For voice calls to work, select Signal as the SIP application in OrWall, and allow SIP access.
Because Verified Boot ensures filesystem integrity at the device block level, and because we modify the root and system filesystems, normal over-the-air updates will not work. The fact that we use different device keys will prevent the official updates from installing at all, but even if they did, they would remove the installation of Google Play, SuperUser, and the OrWall initial firewall script.
When the phone notifies you of an update, you should instead download the latest Copperhead factory image to the mission-improbable working directory, and use update.sh to convert it into a signed update zip that will get sideloaded and installed by the recovery. You need to have the same keys from the installation in the keys subdirectory.
The update.sh script should walk you through this process.
Updates may also reset the system clock, which must be accurate for Orbot to connect to the Tor network. If this happens, you may need to reset the clock manually under Date and Time Settings.
I use this prototype for all of my personal communications - Email, Signal, XMPP+OTR, Mumble, offline maps and directions in OSMAnd, taking pictures, and reading news and books. I use Intent Intercept to avoid accidentally clicking on links, and to avoid surprising cross-app launching behavior.
For Internet access, I personally use a secondary phone that acts as a router for this phone while it is in airplane mode. That phone has an app store and I use it for less trusted, non-private applications, and for emergency situations should a bug with the device prevent it from functioning properly. However, it is also possible to use a cheap wifi cell router, or simply use the actual cell capabilities on the phone itself. In that case, you may want to look into CSipSimple, and a VoIP provider, but see the Future Work section about potential snags with using SIP and Signal at the same time.
I also often use Google Voice or SIP numbers instead of the number of my actual phone's SIM card just as a general protection measure. I give people this number instead of the phone number of my actual cell device, to prevent remote baseband exploits and other location tracking attacks from being trivial to pull off from a distance. This is a trade-off, though, as you are trusting the VoIP provider with your voice data, and on top of this, many of them do not support encryption for call signaling or voice data, and fewer still support SMS.
For situations where using the cell network at all is either undesirable or impossible (perhaps because it is disabled due to civil unrest), the mesh network messaging app Rumble shows a lot of promise. It supports both public and encrypted groups in a Twitter-like interface run over either a wifi or bluetooth ad-hoc mesh network. It could use some attention.
Like the last post on the topic, this prototype obviously has a lot of unfinished pieces and unpolished corners. We've made a lot of progress as a community on many of the future work items from that last post, but many still remain.
Future work: More Device Support
As mentioned above, installation should work on all devices that Copperhead supports out of the box. However, updates require the addition of an updater-script and an adaptation of the releasetools.py for that device, to convert the radio and bootloader images to the OTA update format.
Future Work: MicroG support
Instead of Google Play Services, it might be nice to provide the Open Source MicroG replacements. This requires some hackery to spoof the Google Play Services signature field, though. Unfortunately, this method creates a permission that any app can request to spoof signatures for any service. We'd be much happier about this if we could find a way for MicroG to be the only app able to spoof signatures, and only for the Google services it replaces. This may be as simple as hardcoding those app ids in an updated version of one of these patches.
Future Work: Netfilter API (or better VPN APIs)
Back in the WhisperCore days, Moxie wrote a Netfilter module using libiptc that enabled apps to edit iptables rules if they had permissions for it. This would eliminate the need for iptables shell callouts in OrWall, would be more stable and less leaky than the current VPN APIs, and would eliminate the need to have root access on the device (which is additional vulnerability surface). That API needs to be dusted off and updated for Copperhead compatibility, and then OrWall would need to be updated to use it, if present.
Alternatively, the VPN API could be used, if there were ways to prevent leaks at boot, DNS leaks, and leaks if the app is killed or crashes. We'd also want the ability to control network access per app, and to let VoIP apps bypass the VPN for UDP.
Future Work: Fewer Binary Blobs
There are unfortunately quite a few binary blobs extracted from the Copperhead build tree in the repository. They are enumerated in the README. This was done for expedience. Building some of those components outside of the android build tree is fairly difficult. We would happily accept patches for this, or for replacement tools.
Future Work: F-Droid auto-updates, crash reporting, and install count analytics
These requests come from Moxie. Having these would make him much happier about F-Droid Signal installs.
It turns out that F-Droid supports full auto-updates with the Privileged Extension, which Copperhead is working on including.
Future Work: Build Reproducibility
Copperhead itself is not yet built reproducibly. In our opinion, though, this is AOSP's responsibility. If the core team at Google won't do it, they should at least fund Copperhead or some other entity to work on it for them. Reproducible builds should be an organizational priority for all software companies. Moreover, in combination with free software, they are an excellent deterrent against backdoors.
In this brave new world, even if we can trust that the NSA won't be ordered to attack American companies to insert backdoors, deteriorating relationships with China and other state actors may mean that their incentives to hold back on such attacks will be greatly reduced. Closed source components can also benefit from reproducible builds, since compromising multiple build systems/build teams is inherently harder than compromising just one.
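The deterrent works because independent builders can cross-check each other's artifacts. Here is a toy Python sketch of the idea (an illustration only, not the actual AOSP or Copperhead build process):

```python
# Toy illustration of why reproducible builds deter backdoors: several
# independent parties compile the same source with a deterministic
# toolchain and compare artifact hashes. A single coerced or compromised
# build machine then produces a visible mismatch.
import hashlib

def digest(artifact: bytes) -> str:
    return hashlib.sha256(artifact).hexdigest()

# Pretend three independent parties built the same release from source:
honest_build = b"\x7fELF identical bytes from a deterministic toolchain"
builders = {
    "google": digest(honest_build),
    "copperhead": digest(honest_build),
    # One build machine was coerced into inserting a backdoor:
    "compromised": digest(honest_build + b"\x90\x90 backdoor"),
}

# Majority digest wins; anyone who disagrees is flagged for investigation.
consensus = max(set(builders.values()), key=list(builders.values()).count)
outliers = [name for name, d in builders.items() if d != consensus]
print(outliers)  # → ['compromised']
```

Note that this check requires no trust in any single builder, which is exactly what makes coercing one of them pointless.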
Future Work: Orbot Stability
Unfortunately, the stability of Orbot itself still leaves a lot to be desired. It is fairly fragile to network disconnects, and it often becomes stuck in states that require you to go into the Android Settings for Apps and Force Stop Orbot before it can reconnect properly. The startup UI is also fragile to network connectivity problems.
Worse: if you tap the start button either too hard or multiple times while the network is disconnected, or while the phone's clock is out of sync, Orbot can become confused and claim that it is connected when it is not. Luckily, because Tor network access is enforced by OrWall (and the Android kernel), instabilities in Orbot do not risk Tor leaks.
Future Work: Backups and Remote Wipe
Unfortunately, backups are an unsolved problem. In theory, adb backup -all should work, but even the latest adb version from the official Android SDK appears to back up and restore only partial data. Apparently this is because adb obeys manifest restrictions on apps that request not to be backed up. For the purposes of full device backup, it would be nice to have an adb version that really backed up everything.
Instead, I use the export feature of K-9 Mail, Contacts, and the Calendar Import-Export app to export that data to /sdcard, and then adb pull /sdcard. It would be nice to have an end-to-end encrypted remote backup app, though. Flock had promise, but was unfortunately discontinued.
Similarly, if a phone is lost, it would be nice to have a cryptographically secure remote wipe feature.
Future Work: Baseband Analysis (and Isolation)
Until phones with auditable baseband isolation are available (the Neo900 looks like a promising candidate), the baseband remains a problem on all of these phones. It is unknown if vulnerabilities or backdoors in the baseband can turn on the mic, make silent calls, or access device memory. Using a portable hotspot or secondary insecure phone is one option for now, but it is still unknown if the baseband is fully disabled in airplane mode. In the previous post, commenters recommended wiping the baseband, but on most phones, this seems to also disable GPS.
Future Work: Wifi AP Scanning Prevention
Copperhead may randomize the MAC address, but it is quite likely that it still tries to connect to configured APs, even if they are not there (see these two XDA threads). This can reveal information about your home and work networks, and any other networks you have configured.
Future Work: Port Tor Browser to Android
The Guardian Project is undertaking a port of Tor Browser to Android as part of their OrFox project. This port is still incomplete, however. The Tor Project is working on obtaining funding to bring it on par with the desktop Tor Browser.
Future Work: Better SIP Support
Right now, it is difficult to use two or more SIP clients in OrWall. You basically have to switch between them in the settings, which is also fragile and error prone. It would be ideal if OrWall allowed multiple SIP apps to be selected.
Additionally, SIP providers and SIP clients have very poor support for TLS and SRTP encryption for call setup and voice data. I could find only two such providers that advertised this support, but I was unable to actually get TLS and SRTP working with CSipSimple or LinPhone for either of them.
Future Work: Installation and full OTA updates without Linux
In order for this to become a real end-user phone, we need to remove the requirement to use Linux in order to install and update it. Unfortunately, this is tricky. Technically, Google Play can't be distributed in a full Android firmware, so we'd have to get special approval for that. Alternatively, we could make the default install use MicroG, as above. In either case, it should just be a matter of taking the official Copperhead builds, modifying them, changing the update URL, and shipping those devices with Google Play/MicroG and the new OTA location. Copperhead or Tor could easily support multiple device install configurations this way without needing to rebuild everything for each one. So legal issues aside, users could easily have their choice of MicroG, Google Play, or neither.
Personally, I think the demand is higher for some level of Google account integration functionality than what MicroG provides, so it would be nice to find some way to make that work. But there are solid reasons for avoiding the use of a Google account (such as Google's mistreatment of Tor users, the unavailability of Google in certain areas of the world due to censorship of Google, and the technical capability of Google Play to send targeted backdoored versions of apps to specific accounts).
Future Work: Better Boot Key Representation/Authentication
The truncated fingerprint is not the best way to present a key to the user. It is both too short for security, and too hard to read. It would be better to use something like the SSH Randomart representation, or some other visual representation that encodes a cryptographically strong version of the key fingerprint, and asks the user to click through it to boot. Though obviously, if this boot process can also be modified, this may be insufficient.
Future Work: Faster GPS Lock
The GPS on these devices is device-only by default, which can mean it is very slow. It would be useful to find out if µg UnifiedNlp can help, and which of its backends are privacy preserving enough to recommend/enable by default.
Future Work: Sensor Management/Removal
As pointed out in great detail in one of the comments below, these devices have a large number of sensors on them that can be used to create side channels, gather information about the environment, and send it back. The original Mission Impossible post went into quite a bit of detail about how to remove the microphone from the device. This time around, I focused on software security. But like the commenter suggested, you can still go down the hardware-modding rabbit hole if you like. Just search YouTube for "teardown Nexus 6P", or similar.
Like the last post, this post will likely be updated for a while based on community feedback. Here is the list of those changes so far.
- Added information about secondary SIP/VoIP usage in the Usage section and the Future Work sections.
- Added a warning not to upgrade OrWall until Issue 121 is fixed.
- Describe how we could remove the Linux requirement and have OTA updates, as a Future Work item.
- Remind users to check their key fingerprint at installation and boot, and point out in the Future Work section that this UI could be better.
- Mention the Neo900 in the Future Work: Baseband Isolation section.
- Wow, the Signal vs F-Droid issue is a stupid hot mess. Can't we all just get along and share the software? Don't make me sing the RMS song, people... I'll do it...
- Added a note that you need the Guardian Project F-Droid repo to update Orbot.
- Add a thought to the Systemic Threats to Software Freedom section about using licensing to enforce the update requirement in order to use the AOSP.
- Mention ApkTrack for monitoring for Signal updates, and Intent Intercept for avoiding risky clicks.
- Mention alternate location providers as Future Work, and that we need to pick a decent backend.
- Link to Conversations and some other apps in the usage section. Also add some other links here and there.
- Mention that Date and Time must be set correctly for Orbot to connect to the network.
- Added a link to Moxie's netfilter code to the Future Work section, should anyone want to try to dust it off and get it working with Orwall.
- Use keytool instead of sha256sum to verify the Signal key's fingerprint. The CERT.RSA file is not stable across versions.
- The latest Orbot 15.2.0-rc8 still has issues claiming that it is connected when it is not. This is easiest to observe if the system clock is wrong, but it can also happen on network disconnects.
- Add a Future Work section for sensor management/removal.
Wednesday, CloudFlare blogged that 94% of the requests it sees from Tor are "malicious." We find that unlikely, and we've asked CloudFlare to provide justification to back up this claim. We suspect this figure is based on a flawed methodology by which CloudFlare labels all traffic from an IP address that has ever sent spam as "malicious." Tor IP addresses are conduits for millions of people who are then blocked from reaching websites under CloudFlare's system.
We're interested in hearing CloudFlare's explanation of how they arrived at the 94% figure and why they choose to block so much legitimate Tor traffic. While we wait to hear from CloudFlare, here's what we know:
1) CloudFlare uses an IP reputation system to assign scores to IP addresses that generate malicious traffic. In their blog post, they mentioned obtaining data from Project Honey Pot, in addition to their own systems. Project Honey Pot has an IP reputation system that causes IP addresses to be labeled as "malicious" if they ever send spam to a select set of diagnostic machines that are not normally in use. CloudFlare has not described the nature of the IP reputation systems they use in any detail.
2) External research has found that CloudFlare blocks at least 80% of Tor IP addresses, and this number has been steadily increasing over time.
3) That same study found that it typically took 30 days for an event to happen that caused a Tor IP address to acquire a bad reputation and become blocked, but once that happened, innocent users continued to be punished for it for the duration of the study.
4) That study also showed a disturbing increase over time in how many IP addresses CloudFlare blocked without removal. CloudFlare's approach to blocking abusive traffic is incurring a large amount of false positives in the form of impeding normal traffic, thereby damaging the experience of many innocent Tor and non-Tor Internet users, as well as impacting the revenue streams of CloudFlare's own customers by causing frustrated or blocked users to go elsewhere.
5) A report by CloudFlare competitor Akamai found that the percentage of legitimate e-commerce traffic originating from Tor IP addresses is nearly identical to that originating from the Internet at large. (Specifically, Akamai found that the "conversion rate" of Tor IP addresses clicking on ads and performing commercial activity was "virtually equal" to that of non-Tor IP addresses).
CloudFlare disagrees with our use of the word "block" when describing its treatment of Tor traffic, but that's exactly what their system ultimately does in many cases. Users are either blocked outright with CAPTCHA server failure messages, or prevented from reaching websites with a long (and sometimes endless) loop of CAPTCHAs, many of which require the user to understand English in order to solve correctly. For users in developing nations who pay for Internet service by the minute, the problem is even worse as the CAPTCHAs load slowly and users may have to solve dozens each day with no guarantee of reaching a particular site. Rather than waste their limited Internet time, such users will either navigate away, or choose not to use Tor and put themselves at risk.
Also see our new fact sheet about CloudFlare and Tor: https://people.torproject.org/~lunar/20160331-CloudFlare_Fact_Sheet.pdf
The Tor Project exists to provide privacy and anonymity for millions of people, including human rights defenders across the globe whose lives depend on it. The strong encryption built into our software is essential for their safety.
In an age when people have so little control over the information recorded about their lives, we believe that privacy is worth fighting for.
We therefore stand with Apple to defend strong encryption and to oppose government pressure to weaken it. We will never backdoor our software.
Our users face very serious threats. These users include bloggers reporting on drug violence in Latin America; dissidents in China, Russia, and the Middle East; police and military officers who use our software to keep themselves safe on the job; and LGBTI individuals who face persecution nearly everywhere. Even in Western societies, studies demonstrate that intelligence agencies such as the NSA are chilling dissent and silencing political discourse merely through the threat of pervasive surveillance.
For all of our users, their privacy is their security. And for all of them, that privacy depends upon the integrity of our software, and on strong cryptography. Any weakness introduced to help a particular government would inevitably be discovered and could be used against all of our users.
The Tor Project employs several mechanisms to ensure the security and integrity of our software. Our primary product, the Tor Browser, is fully open source. Moreover, anyone can obtain our source code and produce bit-for-bit identical copies of the programs we distribute using Reproducible Builds, eliminating the possibility of single points of compromise or coercion in our software build process. The Tor Browser downloads its software updates anonymously using the Tor network, and update requests contain no identifying information that could be used to deliver targeted malicious updates to specific users. These requests also use HTTPS encryption and pinned HTTPS certificates (a security mechanism that allows HTTPS websites to resist being impersonated by an attacker by specifying exact cryptographic keys for sites). Finally, the updates themselves are also protected by strong cryptography, in the form of package-level cryptographic signatures (the Tor Project signs the update files themselves). This use of multiple independent cryptographic mechanisms and independent keys reduces the risk of single points of failure.
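The value of layering independent mechanisms can be sketched in a few lines of Python. This is a toy illustration, not the Tor Browser updater's actual code; HMAC stands in for real public-key signatures, and all names and values are made up.

```python
# Toy sketch of defense in depth for software updates: an update is
# accepted only if the transport certificate matches a pinned fingerprint
# AND the package-level signature verifies. The two keys are held
# independently, so compromising either mechanism alone is not enough.
import hashlib
import hmac

PINNED_CERT_FINGERPRINT = hashlib.sha256(b"example-update-server-cert").hexdigest()
SIGNING_KEY = b"independent-package-signing-key"  # held separately from the TLS key

def sign_package(package: bytes) -> str:
    # HMAC stands in for a real public-key signature for illustration.
    return hmac.new(SIGNING_KEY, package, hashlib.sha256).hexdigest()

def accept_update(package: bytes, signature: str, server_cert: bytes) -> bool:
    cert_ok = hashlib.sha256(server_cert).hexdigest() == PINNED_CERT_FINGERPRINT
    sig_ok = hmac.compare_digest(sign_package(package), signature)
    return cert_ok and sig_ok  # both independent checks must pass

pkg = b"example-browser-update"
good = accept_update(pkg, sign_package(pkg), b"example-update-server-cert")
# An attacker with a mis-issued certificate but no signing key still fails:
forged = accept_update(b"backdoored", sign_package(pkg), b"example-update-server-cert")
print(good, forged)  # True False
```

An attacker must defeat both the pin and the package signature, each backed by a different key, which is the "no single point of failure" property described above.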
The Tor Project has never received a legal demand to place a backdoor in its programs or source code, nor have we received any requests to hand over cryptographic signing material. This isn't surprising: we've been public about our "no backdoors, ever" stance, we've had clear public support from our friends at EFF and ACLU, and it's well-known that our open source engineering processes and distributed architecture make it hard to add a backdoor quietly.
From an engineering perspective, our code review and open source development processes make it likely that such a backdoor would be quickly discovered. We are also currently accelerating the development of a vulnerability-reporting reward program to encourage external software developers to look for and report any vulnerabilities that affect our primary software products.
The threats that Apple faces to hand over its cryptographic signing keys to the US government (or to sign alternate versions of its software for the US government) are no different than threats of force or compromise that any of our developers or our volunteer network operators may face from any actor, governmental or not. For this reason, regardless of the outcome of the Apple decision, we are exploring further ways to eliminate single points of failure, so that even if a government or a criminal obtains our cryptographic keys, our distributed network and its users would be able to detect this fact and report it to us as a security issue.
Like those at Apple, several of our developers have already stated that they would rather resign than honor any request to introduce a backdoor or vulnerability into our software that could be used to harm our users. We look forward to making an official public statement on this commitment as the situation unfolds. However, since requests for backdoors or cryptographic key material so closely resemble many other forms of security failure, we remain committed to researching and developing engineering solutions to further mitigate these risks, regardless of their origin.
We congratulate Apple on their commitment to the privacy and security of their users, and we admire their efforts to advance the debate over the right to privacy and security for all.
1. What are your priorities for onion services development?
Personally I think it’s very important to work on the security of hidden services; that’s a big priority.
The plan for the next generation of onion services includes enhanced security as well as improved performance. We’ve broken the development down into smaller modules and we’re already starting to build the foundation. The whole thing is a pretty insane engineering job.
2. What don't people know about onion Services?
Until earlier this year, hidden services were a labor of love that Tor developers did in their spare time. Now we have a very small group of developers, but in 2016 we want to move the engineering capacity a bit farther out. There is a lot of enthusiasm within Tor for hidden services but we need funding and more high level developers to build the next generation.
3. What are some of Tor's plans for mitigating attacks?
The CMU attack was fundamentally a "guard node" attack; guard nodes are the first hop of a Tor circuit and hence the only part of the network that can see the real IP address of a hidden service. Last July we fixed the attack vector that CMU was using (it was called the RELAY_EARLY confirmation attack) and since then we've been devising improved designs for guard node security.
For example, in the past, each onion service would have three guard nodes assigned to it. Since last September, each onion service uses only one guard node, so it exposes itself to fewer relays. This change alone makes an attack against an onion service much less likely.
Several of our developers are thinking about how to do better guard node selection. One of us is writing code on this right now.
We are modeling how onion services pick guard nodes currently, and we're simulating other ways to do it to see which one exposes itself to fewer relays—the fewer relays you are exposed to, the safer you are.
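The intuition can be sketched in a few lines of Python. This is a toy model with made-up parameters (network size, rotation count), not our actual simulation code:

```python
# Toy model: count how many distinct relays ever learn a service's IP
# address over many guard rotations, for one guard versus three guards.
# Fewer distinct relays seen means fewer chances for an attacker's relay
# to be chosen as the guard.
import random

def relays_exposed_to(num_guards, rotations, network_size=2000, seed=42):
    rng = random.Random(seed)  # fixed seed for a repeatable illustration
    seen = set()
    for _ in range(rotations):
        # Each rotation, the service picks a fresh set of guards:
        seen.update(rng.sample(range(network_size), num_guards))
    return len(seen)

one = relays_exposed_to(1, rotations=50)
three = relays_exposed_to(3, rotations=50)
print(one, three)  # the one-guard design exposes far fewer relays
assert one < three
```

The real modeling work also weighs relay bandwidth, guard lifetimes, and adversary strategies, but the core metric is the same: minimize the set of relays that ever see the service's IP address.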
We’ve also been working on other security things as well. For instance, a series of papers and talks have abused the directory system of hidden services to try to estimate the activity of particular hidden services, or to launch denial-of-service attacks against hidden services.
We’re going to fix this by making it much harder for the attacker's nodes to become the relay responsible for a hidden service (say, catfacts) and thus able to track uptime and usage information. We will use a "distributed random number generator": many computers teaming up to generate a single, fresh, unpredictable random number.
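The basic commit-and-reveal idea behind such a generator can be sketched in Python. This is a toy illustration; the real design has considerably more machinery for handling participants who refuse to reveal or try to cheat.

```python
# Minimal commit-and-reveal sketch of a distributed random number
# generator: each party first publishes a hash (commitment) of a secret
# value, and only then reveals the value itself. Because commitments are
# fixed before any secret is seen, no party can bias the final number
# after looking at the others' contributions.
import hashlib
import secrets

def commit(value: bytes) -> str:
    return hashlib.sha256(value).hexdigest()

# Commit phase: every participating server publishes a hash of its secret.
party_secrets = [secrets.token_bytes(32) for _ in range(5)]
commitments = [commit(v) for v in party_secrets]

# Reveal phase: secrets are published and checked against the commitments.
assert all(commit(v) == c for v, c in zip(party_secrets, commitments))

# The shared random value is a hash over all revealed secrets; changing
# any one input changes the output unpredictably.
shared_random = hashlib.sha256(b"".join(sorted(party_secrets))).hexdigest()
print(shared_random)
```

With a fresh shared random value like this feeding the directory design, an attacker can no longer predict (and therefore position for) which relays will be responsible for a given hidden service.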
Another important thing we're doing is to make it impossible for a directory service to harvest addresses in the new design. If you don't know a hidden service address, then under the new system, you won't find it out just by hosting its HSDir entry.
There are also interesting performance things: We want to make .onion services scalable in large infrastructures like Facebook--we want high availability and better load balancing; we want to make it serious.
[Load balancing distributes the traffic load of a website to multiple servers so that no one server gets overloaded with all the users. Overloaded servers stop responding and create other problems. An attack that purposely overloads a website to cause it to stop responding is called a Denial of Service (DoS) attack. - Kate]
There are also onion services that don’t care to stay hidden, like Blockchain or Facebook; we can make those much faster, which is quite exciting.
Meanwhile, Nick is working on a new encryption design: magic circuit crypto that will make it harder to do active confirmation attacks. [Nick Mathewson is the co-founder of the Tor Project and the chief architect of our software.] Active confirmation attacks are much more powerful than passive attacks, and we can do a better job of defending against them.
A particular type of confirmation attack that Nick's new crypto is going to solve is a "tagging attack"—Roger wrote a blog post about them years ago called "One Cell Is Enough", about how they work and how powerful they are.
4. Do you run an onion service yourself?
Yes, I do run onion services; I run an onion service on every box I have. I connect to the PC in my house from anywhere in the world through SSH by connecting to my onion service instead of my house IP. People can see my laptop accessing Tor but don’t know who I am or where I go.
Also, onion services have a property called NAT punching (NAT = Network Address Translation). NAT blocks incoming connections; it builds walls around you. Onion services have NAT punching and can penetrate a firewall. On my university campus, the firewall does not allow incoming connections to my SSH server, but with an onion service the firewall is irrelevant.
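For the curious, exposing a local SSH server as an onion service takes only a couple of lines of torrc on the server side (the directory path here is just an example):

```
# torrc: expose the local SSH daemon as an onion service.
# Tor makes only outgoing connections, so this works behind NAT and firewalls.
HiddenServiceDir /var/lib/tor/ssh_onion/
HiddenServicePort 22 127.0.0.1:22
```

After restarting Tor, the service's .onion address appears in the hostname file inside HiddenServiceDir, and you can reach it from anywhere with something like `torsocks ssh user@youraddress.onion`.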
5. What is your favorite onion service that a nontechnical person might use?
I use Ricochet for my peer-to-peer chatting; it has a very nice UI and works well.
6. Do you think it’s safe to run an onion service?
It depends on your adversary. I think onion services provide adequate security against most real life adversaries.
However, if a serious and highly motivated adversary were after me, I would not rely solely on the security of onion services. If your adversary can wiretap the whole Western Internet, or has a million dollar budget, and you only depend on hidden services for your anonymity then you should probably up your game. You can add more layers of anonymity by buying the servers you host your hidden service on anonymously (e.g. with bitcoin) so that even if they deanonymize you, they can't get your identity from the server. Also studying and actually understanding the Tor protocol and its threat model is essential practice if you are defending against motivated adversaries.
7. What onion services don’t exist yet that you would like to see?
Onion services right now are super-volatile; they may appear for three months and then they disappear. For example, there was a Twitter clone, Tor statusnet; it was quite fun--small but cozy. The guy or girl who was running it couldn’t do it any longer. So, goodbye! It would be very nice to have a Twitter clone in onion services. Everyone would be anonymous. Short messages by anonymous people would be an interesting thing.
I would like to see apps for mobile phones using onion services more—SnapChat over Tor, Tinder over Tor—using Orbot or whatever.
A good search engine for onion services. The volatility comes down to not having a search engine: you could have a great service, but only 500 sketchoids on the Internet might know about it.
Right now, hidden services are misty and hard to see, with the fog of war all around. A sophisticated search engine could highlight the nice things and the nice communities; those would get far more traffic and users and would stay up longer.
The second question is how you make things. For many people, it’s not easy to set up an onion service. You have to open Tor, hack some configuration files, and there's more.
We need a system where you double click, and bam, you have an onion service serving your blog. Griffin Boyce is developing a tool for this named Stormy. If we have a good search engine and a way for people to start up onion services easily, we will have a much nicer and more normal Internet in the onion space.
8. What is the biggest misconception about onion services?
People don't realize how many use cases there are for onion services or the inventive ways that people are using them already. Only a few onion services ever become well known and usually for the wrong reasons.
I think it ties back to the previous discussion: the onion services we all enjoy have no way of getting to us. Right now, they are marooned on their island of hiddenness.
9. What is the biggest misconception about onion services development?
It’s a big and complex project—it’s building a network inside a network; building a thing inside a thing. But we are a tiny team. We need the resources and person power to do it.