
What's new in Tor 0.2.9.8?

Today, we've released the first stable version of the 0.2.9.x series, bringing exciting new features to Tor. The series has seen 1406 commits from 32 different contributors. Please see the ChangeLog for more details about what has been done.

This post outlines three features (among many other improvements) that we are quite proud of and want to describe in more detail.

Single Onion Service

Over the past several years, we've collaborated with many large-scale service providers, such as Facebook and Riseup, organizations that deployed onion services and wanted to improve their performance.

Onion services are great because they offer anonymity on both the service side and the client side. However, there are cases where the onion service does not require anonymity; the main example is a service provider that does not need to hide the location of its servers.

As a reminder to the reader, a connection between a client and an onion service goes through 6 hops, while a regular Tor connection uses 3. Onion services are much slower than regular Tor connections because of this.

Today, we are introducing Single Onion Services! With this new feature, a service can specify in its configuration file that it does not need anonymity, cutting the 3 hops between the service and its Rendezvous Point down to one and speeding up the connection.

For security reasons, if this option is enabled, only single onion services can be configured; they can't coexist with regular onion services on the same Tor instance. Because this removes the anonymity of the service, we took extra precautions to make it very difficult to enable single onion mode by mistake. Here is how you enable it in your torrc file:


HiddenServiceNonAnonymousMode 1
HiddenServiceSingleHopMode 1


Please read about these options in the manual page before you enable them.

Shared Randomness

We've talked about this before but now it is a reality. At midnight UTC every day, directory authorities collectively generate a global random value that cannot be predicted in advance. This daily fresh random value is the foundation of our next generation onion service work coming soon to a Tor near you.

In the consensus file, the values look like this; if all goes well, at 00:00 UTC each day the consensus will contain a new one:


shared-rand-current-value Hq+hGlzwAVetJ2zkO70riH/SEMNri+c7Ps8xERZ3a0o=
shared-rand-previous-value CY5TncVAltDpkBKZUBYT1canvqmVoNuweiKVZIilHfs=


Thanks to atagar, the Stem library version 1.5.0 supports parsing the shared random values from the consensus. See the Stem documentation for more information!
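
For example, here is a minimal sketch of reading those values with Stem 1.5.0 from a cached consensus file. The path below is only an assumption (a common default on Linux); adjust it to your Tor data directory:


from stem.descriptor import DocumentHandler, parse_file

# Parse the whole consensus as a single document rather than
# iterating over individual router entries.
consensus = next(parse_file(
    '/var/lib/tor/cached-consensus',
    descriptor_type='network-status-consensus-3 1.0',
    document_handler=DocumentHandler.DOCUMENT,
))

print(consensus.shared_randomness_current_value)
print(consensus.shared_randomness_previous_value)
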

We have deliberately not exposed those values on the control port yet; we will wait for a full stable release cycle to make sure the feature is stable enough for third-party applications to rely on (https://trac.torproject.org/19925).

Mandatory ntor handshake

This is another important security feature introduced in the new release. Authorities, relays and clients now require ntor keys in all descriptors, for all hops, for all circuits, and for all other roles.

In other words, except for onion services (which will be addressed in the next generation), only ntor is used; the old TAP handshake has finally been dropped.

This results in better security for the overall network and users.


Enjoy this new release!

Tor 0.3.0.1-alpha: A new alpha series begins

Now that Tor 0.2.9.8 is stable, it's time to release a new alpha series for testing and bug-hunting!

Tor 0.3.0.1-alpha is the first alpha release in the 0.3.0 development series. It strengthens Tor's link and circuit handshakes by identifying relays by their Ed25519 keys, improves the algorithm that clients use to choose and maintain their list of guards, and includes additional backend support for the next-generation hidden service design. It also contains numerous other small features and improvements to security, correctness, and performance.

You can download the source from the usual place on the website. Packages should be available over the next few weeks, including an alpha Tor Browser release sometime in January.

Please note: This is an alpha release. Please expect more bugs than usual. If you want a stable experience, please stick to the stable releases.

Below are the changes since 0.2.9.8.

Changes in version 0.3.0.1-alpha - 2016-12-19

  • Major features (guard selection algorithm):
    • Tor's guard selection algorithm has been redesigned from the ground up, to better support unreliable networks and restrictive sets of entry nodes, and to better resist guard-capture attacks by hostile local networks. Implements proposal 271; closes ticket 19877.
  • Major features (next-generation hidden services):
    • Relays can now handle v3 ESTABLISH_INTRO cells as specified by prop224 aka "Next Generation Hidden Services". Services and clients don't use this functionality yet. Closes ticket 19043. Based on initial code by Alec Heifetz.
    • Relays now support the HSDir version 3 protocol, so that they can store and serve v3 descriptors. This is part of the next-generation onion service work detailed in proposal 224. Closes ticket 17238.
  • Major features (protocol, ed25519 identity keys):
    • Relays now use Ed25519 to prove their Ed25519 identities to one another, and to clients. This algorithm is faster and more secure than the RSA-based handshake we've been doing until now. Implements the second big part of proposal 220; closes ticket 15055.
    • Clients now support including Ed25519 identity keys in the EXTEND2 cells they generate. By default, this is controlled by a consensus parameter, currently disabled. You can turn this feature on for testing by setting ExtendByEd25519ID in your configuration (see the example after this list). This might make your traffic appear different than the traffic generated by other users, however. Implements part of ticket 15056; part of proposal 220.
    • Relays now understand requests to extend to other relays by their Ed25519 identity keys. When an Ed25519 identity key is included in an EXTEND2 cell, the relay will only extend the circuit if the other relay can prove ownership of that identity. Implements part of ticket 15056; part of proposal 220.
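
As mentioned in the list above, turning the EXTEND2 feature on for testing takes a single torrc line; remember the caveat that it may make your traffic stand out:


ExtendByEd25519ID 1
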


Tor 0.2.9.8 is released: finally, a new stable series!

Tor 0.2.9.8 is the first stable release of the Tor 0.2.9 series.

The Tor 0.2.9 series makes mandatory a number of security features that were formerly optional. It includes support for a new shared-randomness protocol that will form the basis for next generation hidden services, includes a single-hop hidden service mode for optimizing .onion services that don't actually want to be hidden, tries harder not to overload the directory authorities with excessive downloads, and supports a better protocol versioning scheme for improved compatibility with other implementations of the Tor protocol.

And of course, there are numerous other bugfixes and improvements.

This release also includes a fix for a medium-severity issue (bug 21018 below) where Tor clients could crash when attempting to visit a hostile hidden service. Clients are recommended to upgrade as packages become available for their systems.

You can download the source code from the usual place on the website. Packages should be up within the next few days, with a Tor Browser release planned for early January.

Below are listed the changes since Tor 0.2.8.11. For a list of changes since 0.2.9.7-rc, see the ChangeLog file.

Changes in version 0.2.9.8 - 2016-12-19

  • New system requirements:
    • When building with OpenSSL, Tor now requires version 1.0.1 or later. OpenSSL 1.0.0 and earlier are no longer supported by the OpenSSL team, and should not be used. Closes ticket 20303.
    • Tor now requires Libevent version 2.0.10-stable or later. Older versions of Libevent have less efficient backends for several platforms, and lack the DNS code that we use for our server-side DNS support. This implements ticket 19554.
    • Tor now requires zlib version 1.2 or later, for security, efficiency, and (eventually) gzip support. (Back when we started, zlib 1.1 and zlib 1.0 were still found in the wild. 1.2 was released in 2003. We recommend the latest version.)
  • Deprecated features:
    • A number of DNS-cache-related sub-options for client ports are now deprecated for security reasons, and may be removed in a future version of Tor. (We believe that client-side DNS caching is a bad idea for anonymity, and you should not turn it on.) The options are: CacheDNS, CacheIPv4DNS, CacheIPv6DNS, UseDNSCache, UseIPv4Cache, and UseIPv6Cache.
    • A number of options are deprecated for security reasons, and may be removed in a future version of Tor. The options are: AllowDotExit, AllowInvalidNodes, AllowSingleHopCircuits, AllowSingleHopExits, ClientDNSRejectInternalAddresses, CloseHSClientCircuitsImmediatelyOnTimeout, CloseHSServiceRendCircuitsImmediatelyOnTimeout, ExcludeSingleHopRelays, FastFirstHopPK, TLSECGroup, UseNTorHandshake, and WarnUnsafeSocks.
    • The *ListenAddress options are now deprecated as unnecessary: the corresponding *Port options should be used instead. These options may someday be removed. The affected options are: ControlListenAddress, DNSListenAddress, DirListenAddress, NATDListenAddress, ORListenAddress, SocksListenAddress, and TransListenAddress.


Tor 0.2.8.12 is released

There's a new "old stable" release of Tor! (But maybe you want the 0.2.9.8 release instead; that also comes out today.)

Tor 0.2.8.12 backports a fix for a medium-severity issue (bug 21018 below) where Tor clients could crash when attempting to visit a hostile hidden service. Clients are recommended to upgrade as packages become available for their systems.

It also includes an updated list of fallback directories, backported from 0.2.9.

Now that the Tor 0.2.9 series is stable, only major bugfixes will be backported to 0.2.8 in the future.

You can download Tor 0.2.8 -- and other older release series -- from dist.torproject.org.

Changes in version 0.2.8.12 - 2016-12-19

  • Major bugfixes (parsing, security, backported from 0.2.9.8):
    • Fix a bug in parsing that could cause clients to read a single byte past the end of an allocated region. This bug could be used to cause hardened clients (built with --enable-expensive-hardening) to crash if they tried to visit a hostile hidden service. Non-hardened clients are only affected depending on the details of their platform's memory allocator. Fixes bug 21018; bugfix on 0.2.0.8-alpha. Found by using libFuzzer. Also tracked as TROVE-2016-12-002 and as CVE-2016-1254.
  • Minor features (fallback directory list, backported from 0.2.9.8):
    • Replace the 81 remaining fallbacks of the 100 originally introduced in Tor 0.2.8.3-alpha in March 2016, with a list of 177 fallbacks (123 new, 54 existing, 27 removed) generated in December 2016. Resolves ticket 20170.
  • Minor features (geoip, backported from 0.2.9.7-rc):
    • Update geoip and geoip6 to the December 7 2016 Maxmind GeoLite2 Country database.

Technology in Hostile States: Ten Principles for User Protection

This blog post is meant to generate a conversation about best practices for using cryptography and privacy by design to improve security and protect user data from well-resourced attackers and oppressive regimes.

The technology industry faces tremendous risks and challenges that it must defend itself against in the coming years. State-sponsored hacking and pressure for backdoors will both increase dramatically, even as soon as early 2017. Faltering diplomacy and faltering trade between the United States and other countries will also endanger the remaining deterrent against large-scale state-sponsored attacks.

Unfortunately, it is also likely that in the United States, current legal mechanisms, such as NSLs and secret FISA warrants, will continue to target the marginalized. This will include immigrants, Muslims, minorities, and even journalists who dare to report unfavorably about the status quo. History is full of examples of surveillance infrastructure being abused for political reasons.

Trust is the currency of the technology industry, and if it evaporates, so will the value of the industry itself. It is wise to get out ahead of this erosion of trust, which has already caused Americans to change online buying habits.

This trust comes from demonstrating the ability to properly handle user data in the face of extraordinary risk. The Tor Project has over a decade of experience managing risk from state and state-sized adversaries in many countries. We want to share this experience with the wider technology community, in the hopes that we can all build a better, safer world together. We believe that the future depends on transparency and openness about the strengths and weaknesses of the technology we build.

To that end, we decided to enumerate some general principles that we follow to design systems that are resistant to coercion, compromise, and single points of failure of all kinds, especially adversarial failure. We hope that these principles can be used to start a wider conversation about current best practices for data management and potential areas for improvement at major tech companies.

Ten Principles for User Protection

1. Do not rely on the law to protect systems or users.
2. Prepare policy commentary for quick response to crisis.
3. Only keep the user data that you currently need.
4. Give users full control over their data.
5. Allow pseudonymity and anonymity.
6. Encrypt data in transit and at rest.
7. Invest in cryptographic R&D to replace non-cryptographic systems.
8. Eliminate single points of security failure, even against coercion.
9. Favor open source and enable user freedom.
10. Practice transparency: share best practices, stand for ethics, and report abuse.

1. Do not rely on the law to protect systems or users.

This is the principle from which the others flow. Whether it is foreign hackers, extra-legal entities like organized crime, or the abuse of power in one of the jurisdictions in which you operate, there are plenty of threats outside and beyond the reach of law that can cause harm to your users. It is wise not to assume that the legal structure will keep your users and their data safe from these threats. Only sound engineering and data management practices can do that.

2. Prepare policy commentary for quick response to crisis.

It is common for technologists to take Principle 1 so far that they ignore the law, or at least ignore the political climate in which they operate. It is possible for the law and even for public opinion to turn against technology quickly, especially during a crisis where people do not have time to fully understand the effects of a particular policy on technology.

The technology industry should be prepared to counter bad policy recommendations with coherent arguments as soon as the crisis hits. This means spending time and devoting resources to testing the public's reaction to statements and arguments about policy in focus groups, with lobbyists, and in other demographic testing scenarios, so that we know what arguments will appeal to which audiences ahead of time. It also means having media outlets, talk show hosts, and other influential people ready to back up our position. It is critical to prepare early. When a situation becomes urgent, bad policy often gets implemented quickly, simply because "something must be done".

3. Only keep the user data that you currently need.

Excessive personally identifiable data retention is dangerous to users, especially the marginalized and the oppressed. Data that is retained is data that is at risk of compromise or future misuse. As Maciej Ceglowski suggests in his talk Haunted By Data, "First: Don't collect it. But if you have to collect it, don't store it! If you have to store it, don't keep it!"

With enough thought and the right tools, it is possible to engineer your way out of your ability to provide data about specific users, while still retaining the information that is valuable or essential to conduct your business. Examples of applications of this idea are Differential Privacy, PrivEx, the EFF's CryptoLog, and how Tor collects its user metrics. We will discuss this idea further in Principle 7; the research community is exploring many additional methods that could be supported and deployed.
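
As a toy illustration of this idea, in the spirit of differential privacy, a service can add Laplace noise to an aggregate counter before it is ever written to disk. This is a minimal sketch, not taken from any of the systems named above; the epsilon and the counter value are illustrative:


import math
import random

def noisy_count(true_count, epsilon=0.1, sensitivity=1.0):
    # Sample Laplace(0, b) noise with b = sensitivity / epsilon,
    # using inverse transform sampling.
    b = sensitivity / epsilon
    u = random.random() - 0.5
    noise = -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Store only the noisy aggregate; the raw per-user events are discarded.
print(noisy_count(1042))
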

4. Give users full control over their data.

For sensitive data that must be retained in a way that can be associated with an individual user, the ethical thing to do is to give users full control over that data. Users should have the ability to remove data that is collected about themselves, and this process should be easy. Users should be given interfaces that make it clear what type of data is collected about them and how, and they should be given easy ways to migrate, restrict, or remove this data if they wish.

5. Allow pseudonymity and anonymity.

Even with full control of your data, there are plenty of reasons to use a pseudonym. Real Name policies harm the marginalized, those vulnerable to abuse, and activists working for social change.

Beyond issues with pseudonymity, the ability to anonymously access information via Tor and VPNs must also be protected and preserved. There is a disturbing trend for automated abuse detection systems to harshly penalize shared IP address infrastructure of all kinds, leading to loss of access.

The Tor Project is working with Cloudflare on both cryptographic and engineering-based solutions to enable Tor users to more easily access websites. We invite interested representatives from other tech companies to help us refine and standardize these solutions, and ensure that these solutions will work for them, too.

6. Encrypt data in transit and at rest.

With recent policy changes in both the US and abroad, it is more important than ever to encrypt data in transit, so that it does not end up in the dragnet. This means more than just HTTPS. Even intra-datacenter communications should be protected by IPSec or VPN encryption.

As more of our data is encrypted in transit, requests for stored data will likely rise. Companies can still be compelled to decrypt data that is encrypted with keys that they control. The only way to keep user data truly safe is to provide ways for users to encrypt that data with keys that only those users control.
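
As a minimal sketch of that last point, assuming the third-party Python "cryptography" package: derive the encryption key from a passphrase that only the user knows, so the service stores ciphertext it cannot decrypt. The parameters and sample data are illustrative:


import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def user_key(passphrase: bytes, salt: bytes) -> Fernet:
    # The key is derived client-side from the user's passphrase and
    # never exists on the server.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=480000)
    return Fernet(base64.urlsafe_b64encode(kdf.derive(passphrase)))

salt = os.urandom(16)                    # stored alongside the ciphertext
f = user_key(b"correct horse battery staple", salt)
token = f.encrypt(b"user data at rest")  # the service stores only this
print(f.decrypt(token))                  # only the passphrase holder can decrypt
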

7. Invest in cryptographic R&D to replace non-cryptographic systems.

A common argument against cryptographic solutions for privacy is that the loss of either features, usability, ad targeting, or analytics is in opposition to the business case for the product in question. We believe that this is because the funding for cryptography has not been focused on these needs. In the United States, much of the current cryptographic R&D funding comes from the US military. As Phillip Rogaway pointed out in Part 4 of his landmark paper, The Moral Character of Cryptographic Work, this has created a misalignment between what gets funded versus what is needed in the private sector to keep users' personal data safe in a usable way.

It would be a wise investment for companies that handle large amounts of user data to fund research into potential replacement systems that are cryptographically privacy preserving. It may be the case that a company can be both skillful and lucky enough to retain detailed records and avoid a data catastrophe for several years, but we do not believe it is possible to keep a perfect record forever.

The following are some areas that we think should be explored more thoroughly, in some cases with further research, and in other cases with engineering resources for actual implementations: Searchable encryption, Anonymous Credentials, Private Ad Delivery, Private Location Queries, Private Location Sharing, and PIR in general.

8. Eliminate single points of security failure, even against coercion.

Well-designed cryptographic systems are extremely hard to compromise. Typically, the adversary looks for a way around the cryptography, either by exploiting other code on the system or by coercing one of the parties to divulge key material or decrypted data. These attacks naturally target the weakest point of the system: a single point of security failure, where the fewest systems need to be compromised and the fewest people will notice. The proper engineering response is to ensure that multiple layers of security must be broken for security to fail, and that security failure is visible and apparent to the largest possible number of people.

Sandboxing, modularization, vulnerability surface reduction, and least privilege are already established as best practices for improving software security. They also eliminate single points of failure. In combination, they force the adversary to compromise multiple hardened components before the system fails. Compiler hardening is another way to eliminate single points of failure in code bases. Even with memory unsafe languages, it is still possible for the compiler to add additional security layers. We believe that compiler hardening could use more attention from companies who contribute to projects like GCC and clang/llvm, so that the entire industry can benefit. In today's world, we all rely on the security of each other's software, sometimes indirectly, in order to do our work.

When security does fail, we want incidents to be publicly visible. Distributed systems and multi-party/multi-key authentication mechanisms are common ways to ensure this visibility. The Tor consensus protocol is a good example of a system that was deliberately designed such that multiple people must be simultaneously compromised or coerced before security will fail. Reproducible builds are another example of this design pattern. While these types of practices are useful when used internally in an organization, this type of design is more effective when it crosses organizational boundaries - so that multiple organizations need to be compromised to break the security of a system - and most effective when it also crosses cultural boundaries and legal jurisdictions.
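
As a minimal sketch of the multi-key idea, using Ed25519 signatures via the Python "cryptography" package: an artifact is accepted only when a quorum of independent signers vouches for it, so no single compromised or coerced party can ship it alone. The quorum size and message are illustrative:


from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def count_valid(message, sig_pairs):
    # sig_pairs: iterable of (public_key, signature) tuples.
    valid = 0
    for pub, sig in sig_pairs:
        try:
            pub.verify(sig, message)
            valid += 1
        except InvalidSignature:
            pass
    return valid

# Three independent signers; require any two to agree before accepting.
keys = [Ed25519PrivateKey.generate() for _ in range(3)]
msg = b"release artifact digest"
sigs = [(k.public_key(), k.sign(msg)) for k in keys[:2]]
assert count_valid(msg, sigs) >= 2
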

We are particularly troubled by the trend towards the use of App Stores to distribute security software and security updates. When each user is personally identifiable to the software update system, that system becomes a perfect vector for backdoors. Globally visible audit logs like Google's General Transparency are one possible solution to this problem. Additionally, the anonymous credentials mentioned in Principle 7 provide a way to authenticate the ability to download an app without revealing the identity of the user, which would make it harder to target specific users with malicious updates.

9. Favor open source and enable user freedom.

The Four Software Freedoms are the ability to use, study, share, and improve software.

Open source software that provides these freedoms has many advantages when operating in a hostile environment. It is easier for experts to certify and verify security properties of the software; subtle backdoors are easier to find; and users are free to modify the software to remove any undesired operation.

The most widely accepted argument against backdoors is that they are technically impossible to deploy, because they compromise the security of the system if they are found. A secondary argument is that backdoors can be avoided by the use of alternative systems, or by their removal. Both of these arguments are stronger for open source than for closed source, precisely because of the Four Freedoms.

10. Practice transparency: share best practices, stand for ethics, and report abuse.

Unfortunately, not all software is open source. Even for proprietary software, the mechanisms by which we design our systems in order to prevent harm and abuse should be shared publicly in as much detail as possible, so that best practices can be reviewed and adopted more widely. For example, Apple is doing great work adopting cryptography for many of its products, but without specifications for how they are using techniques like differential privacy or iMessage encryption, it is hard to know what protections they are actually providing, if any.

Still, even when the details of their work are not public, the best engineers deeply believe that protecting their users is an ethical obligation, to the point of being prepared to publicly resign from their jobs rather than cause harm.

But, before we get to the point of resignation, it is important that we do our best to design systems that make abuse either impossible or evident. We should then share those designs, and responsibly report any instances of abuse. When abuse happens, inform affected organizations, and protect the information of individual users who were at risk, but make sure that users and the general public will hear about the issue with little delay.

Please Join Us

Ideally, this post will spark a conversation about best practices for data management and the deployment of cryptography in companies around the world.

We hope to use this conversation to generate a list of specific best practices that the industry is already undertaking, as well as to provide a set of specific recommendations based on these principles for companies with which we're most familiar, and whose products will have the greatest impact on users.

If you have specific suggestions, or would like to highlight the work of companies who are already implementing these principles, please mention them in the comments. If your company is already taking actions that are consistent with these principles, either write about that publicly, or contact me directly. We're interested in highlighting positive examples of specific best practices as well as instances where we can all improve, so that we all can work towards user safety and autonomy.

We would like to thank everyone at the Tor Project and the many members of the surrounding privacy and Internet freedom communities who provided review, editorial guidance, and suggestions for this post.

Tor at the Heart: Flash Proxy

During the month of December, we're highlighting other organizations and projects that rely on Tor, build on Tor, or are accomplishing their missions better because Tor exists. Check out our blog each day to learn about our fellow travelers. And please support the Tor Project! We're at the heart of Internet freedom. Donate today!

Flash Proxy

Sometimes Tor bridge relays can be blocked despite the fact that their addresses are handed out only a few at a time. Flash proxies create many, generally ephemeral bridge IP addresses, with the goal of outpacing a censor's ability to block them. Rather than increasing the number of bridges at static addresses, flash proxies make existing bridges reachable by a larger and changing pool of addresses.

"Flash proxy" is a name that should make you think "quick" and "short-lived." Our implementation uses standard web technologies: JavaScript and WebSocket. (In the long-ago past we used Adobe Flash, but do not any longer.)

Flash Proxy is built into Tor Browser. In fact, any browser that runs JavaScript and has support for WebSockets is a potential proxy available to help censored Internet users.

How It Works

In addition to the Tor client and relay, we provide three new pieces. The Tor client contacts the flash proxy facilitator to advertise that it needs a connection. The facilitator is responsible for keeping track of clients and proxies, and assigning one to another. The flash proxy polls the facilitator for client registrations, then begins a connection to the client when it gets one. The transport plugins on the client and the relay broker the connection between WebSockets and plain TCP.

A sample session may go like this:

1. The client starts Tor and the client transport plugin program (flashproxy-client), and sends a registration to the facilitator using a secure rendezvous. The client transport plugin begins listening for a remote connection.
2. A flash proxy comes online and polls the facilitator.
3. The facilitator returns a client registration, informing the flash proxy where to connect.
4. The proxy makes an outgoing connection to the client, which is received by the client's transport plugin.
5. The proxy makes an outgoing connection to the transport plugin on the Tor relay. The proxy begins sending and receiving data between the client and relay.

From the user's perspective, only a few things change compared to using normal Tor. The user must run the client transport plugin program and use a slightly modified Tor configuration file.
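
For reference, the client-side configuration looked roughly like this; this is a sketch based on the flashproxy documentation of the era, and the exact lines (especially the plugin path and ports) varied by version:


UseBridges 1
Bridge websocket 0.0.1.0:1
LearnCircuitBuildTimeout 0
ClientTransportPlugin websocket exec /usr/local/bin/flashproxy-client --register :0 :9000
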

Cupcake

Cupcake is an easy way to distribute Flash Proxy, with the goal of getting as many people to become bridges as possible.

Cupcake can be distributed in two ways:

  • As a Chrome or Firefox add-on (turning your computer into a less temporary proxy)
  • As a module/theme/app on popular web platforms (turning every visitor to your site into a temporary proxy)

Tor at the Heart: GlobaLeaks

During the month of December, we're highlighting other organizations and projects that rely on Tor, build on Tor, or are accomplishing their missions better because Tor exists. Check out our blog each day to learn about our fellow travelers. And please support the Tor Project! We're at the heart of Internet freedom. Donate today!

GlobaLeaks

GlobaLeaks is an open source whistleblowing framework that empowers anyone to easily set up and maintain a whistleblowing platform. GlobaLeaks focuses on portability and accessibility and can help many different types of users—media organizations, activist groups, corporations and public agencies—set up their own submission systems. It is a web application running as a Tor Hidden Service that whistleblowers and journalists can use to anonymously exchange information and documents. Started in 2011 by a group of Italians, the project is now developed by the Hermes Center for Transparency and Digital Human Rights.

One of the main goals of GlobaLeaks is to provide a configurable system to meet the needs of under-resourced groups and activists who are communicating in their native languages. By default the platform enforces a strict data deletion policy, encryption of file content on disk, and routing of all network requests through the Tor Network. But configurability allows implementing organizations to make choices about how they engage in the process. The tool makes it easy to choose what languages to use, how long data is stored on the system, and the questions a source must answer before they create a submission.

To date over 60 organizations in more than 20 languages have used GlobaLeaks to set up whistleblowing systems. Investigative journalists are using it to produce evidence in controversial stories, NGOs and public agencies are using it to better handle their communication with sources, and we have even seen businesses adopt the tool to handle internal corruption reporting.

At the end of 2015, Ecuador Transparente, a GlobaLeaks user, uncovered political manipulation by state organizations. MexicoLeaks has produced award-winning journalism while fighting local corruption with the help of the software. You can even see how the Elephant Action League uses the software to combat wildlife crime in the documentary The Ivory Game.

NGOs also use GlobaLeaks to manage the communication process with sources. Organizations like Transparency International Italy and Amnesty International rely on the system to provide a communication channel outside email and telephone networks. The PubLeaks project in the Netherlands uses it to provide a contact point for over 42 Dutch media groups.

A project that uses GlobaLeaks has even helped provide the justification for improving legal protection for whistleblowers. The Serbian parliament recently passed a legal framework for whistleblower protection. Pistaljka.rs was acknowledged by the prime minister of Serbia at an anti-corruption conference in Belgrade for its exemplary role in protecting Serbians reporting on corruption.

In all of these contexts, it is crucially important for sources to remain anonymous. Without the work of the Tor Project, the existence of the Tor Network and the larger Tor community, none of this work would be possible.

Going forward, the development of the project is focused on making it easier to install and maintain a node and on improving the resilience of the platform to attacks. If you would like to get involved, you can help translate the project, hunt for bug bounties, author new code, or donate to the project.

Tor at the Heart: Tor Messenger

During the month of December, we're highlighting other organizations and projects that rely on Tor, build on Tor, or are accomplishing their missions better because Tor exists. Check out our blog each day to learn about our fellow travelers. And please support the Tor Project! We're at the heart of Internet freedom. Donate today!

Tor Messenger

Tor Messenger is a cross-platform chat program that aims to be secure by default and sends all of its traffic over Tor. It supports a wide variety of transport networks, including XMPP, IRC, Twitter, and others; enables Off-the-Record (OTR) Messaging automatically; has an easy-to-use graphical user interface; and has a secure automatic updater.

Tor Messenger builds on the networks you are familiar with, so you can continue communicating in a way your contacts are willing and able to do.

It's based on Instantbird, an instant messaging client developed in the Mozilla community, and leverages both the code (Tor Launcher, updater) and in-house expertise that the Tor Project has garnered working on Tor Browser with Firefox.

It was launched in October 2015 and has since been receiving steady security and stability releases. However, a few important items remain on the short-term roadmap.

This summer, the team participated in Google Summer of Code (GSoC), helping to mentor a project implementing CONIKS. CONIKS is a key verification system with the goal of easing the burden of key management for end users, while at the same time not asking users to trust their communication providers to act in their interest. An alpha release was recently tagged.

At the Tor developers' meeting in Seattle this past September, we held several sessions on messaging. One of the goals was to help determine where to take Tor Messenger in the future. The consensus was that we should focus on eliminating metadata, both in the currently supported networks (where this might materialize as rosterless communication or temporary identities) and by incorporating new networks with architectures like those found in other onion messaging systems. There are many unsolved problems here, like balancing serverless communication with presence detection and asynchronous messaging, and we're excited to help push the field forward.
