Technology in Hostile States: Ten Principles for User Protection

by mikeperry | December 16, 2016

This blog post is meant to generate a conversation about best practices for using cryptography and privacy by design to improve security and protect user data from well-resourced attackers and oppressive regimes.

In the coming years, the technology industry will face tremendous risks that it must defend itself against. State-sponsored hacking and pressure for backdoors will both increase dramatically, perhaps as soon as early 2017. Faltering diplomacy and trade relations between the United States and other countries will also weaken the remaining deterrents against large-scale state-sponsored attacks.

Unfortunately, it is also likely that in the United States, current legal mechanisms, such as NSLs and secret FISA warrants, will continue to target the marginalized. This will include immigrants, Muslims, minorities, and even journalists who dare to report unfavorably about the status quo. History is full of examples of surveillance infrastructure being abused for political reasons.

Trust is the currency of the technology industry, and if it evaporates, so will the value of the industry itself. It is wise to get out ahead of this erosion of trust, which has already caused Americans to change online buying habits.

This trust comes from demonstrating the ability to properly handle user data in the face of extraordinary risk. The Tor Project has over a decade of experience managing risk from state and state-sized adversaries in many countries. We want to share this experience with the wider technology community, in the hopes that we can all build a better, safer world together. We believe that the future depends on transparency and openness about the strengths and weaknesses of the technology we build.

To that end, we decided to enumerate some general principles that we follow to design systems that are resistant to coercion, compromise, and single points of failure of all kinds, especially adversarial failure. We hope that these principles can be used to start a wider conversation about current best practices for data management and potential areas for improvement at major tech companies.

Ten Principles for User Protection

1. Do not rely on the law to protect systems or users.
2. Prepare policy commentary for quick response to crisis.
3. Only keep the user data that you currently need.
4. Give users full control over their data.
5. Allow pseudonymity and anonymity.
6. Encrypt data in transit and at rest.
7. Invest in cryptographic R&D to replace non-cryptographic systems.
8. Eliminate single points of security failure, even against coercion.
9. Favor open source and enable user freedom.
10. Practice transparency: share best practices, stand for ethics, and report abuse.

1. Do not rely on the law to protect systems or users.

This is the principle from which the others flow. Whether it is foreign hackers, extra-legal entities like organized crime, or the abuse of power in one of the jurisdictions in which you operate, there are plenty of threats outside and beyond the reach of law that can cause harm to your users. It is wise not to assume that the legal structure will keep your users and their data safe from these threats. Only sound engineering and data management practices can do that.

2. Prepare policy commentary for quick response to crisis.

It is common for technologists to take Principle 1 so far that they ignore the law, or at least ignore the political climate in which they operate. It is possible for the law and even for public opinion to turn against technology quickly, especially during a crisis where people do not have time to fully understand the effects of a particular policy on technology.

The technology industry should be prepared to counter bad policy recommendations with coherent arguments as soon as the crisis hits. This means spending time and devoting resources to testing the public's reaction to statements and arguments about policy in focus groups, with lobbyists, and in other demographic testing scenarios, so that we know what arguments will appeal to which audiences ahead of time. It also means having media outlets, talk show hosts, and other influential people ready to back up our position. It is critical to prepare early. When a situation becomes urgent, bad policy often gets implemented quickly, simply because "something must be done".

3. Only keep the user data that you currently need.

Excessive personally identifiable data retention is dangerous to users, especially the marginalized and the oppressed. Data that is retained is data that is at risk of compromise or future misuse. As Maciej Ceglowski suggests in his talk Haunted By Data, "First: Don't collect it. But if you have to collect it, don't store it! If you have to store it, don't keep it!"

With enough thought and the right tools, it is possible to engineer your way out of your ability to provide data about specific users, while still retaining the information that is valuable or essential to conduct your business. Examples of applications of this idea are Differential Privacy, PrivEx, the EFF's CryptoLog, and how Tor collects its user metrics. We will discuss this idea further in Principle 7; the research community is exploring many additional methods that could be supported and deployed.
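
To make the idea concrete, here is a minimal sketch of randomized response, one of the oldest mechanisms in the differential-privacy toolbox: each user perturbs their own answer before it is ever collected, yet the aggregate remains measurable. The probabilities, population size, and 30% attribute rate below are all illustrative.

```python
import random

def randomized_response(true_value: bool, p_truth: float = 0.75) -> bool:
    # With probability p_truth, report honestly; otherwise report a coin
    # flip. No single report proves anything about the user.
    if random.random() < p_truth:
        return true_value
    return random.random() < 0.5

def estimate_true_count(reports, p_truth: float = 0.75) -> float:
    # E[yes reports] = p_truth * true_yes + (1 - p_truth) * 0.5 * n; invert it.
    n = len(reports)
    return (sum(reports) - n * (1 - p_truth) * 0.5) / p_truth

users = [random.random() < 0.3 for _ in range(10_000)]  # 30% have the attribute
reports = [randomized_response(u) for u in users]
print(sum(users), round(estimate_true_count(reports)))  # close, yet deniable
```

Each individual report is deniable, yet the estimate converges on the true total as the population grows.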

4. Give users full control over their data.

For sensitive data that must be retained in a way that can be associated with an individual user, the ethical thing to do is to give users full control over that data. Users should have the ability to remove data that is collected about themselves, and this process should be easy. Users should be given interfaces that make it clear what type of data is collected about them and how, and they should be given easy ways to migrate, restrict, or remove this data if they wish.
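
One minimal sketch of what such an interface could look like (Python with Flask; the routes, the in-memory store, and the authenticate() stub are all hypothetical stand-ins for a real service):

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for a real database: user id -> {category: records}.
user_data_store = {"demo-user": {"search-history": ["tor", "vpn"]}}

def authenticate() -> str:
    # Placeholder: a real service would verify a session token here.
    return "demo-user"

@app.get("/me/data")
def export_my_data():
    # Show the user everything held about them, in a portable format.
    return jsonify(user_data_store.get(authenticate(), {}))

@app.delete("/me/data/<category>")
def delete_my_data(category):
    # Let the user remove an entire category of data about themselves.
    user_data_store.get(authenticate(), {}).pop(category, None)
    return "", 204
```

A production version would add real authentication, audit logging, and propagation of deletions to backups, but the shape is the same: the user, not the operator, decides what stays.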

5. Allow pseudonymity and anonymity.

Even with full control of your data, there are plenty of reasons to use a pseudonym. Real Name policies harm the marginalized, those vulnerable to abuse, and activists working for social change.

Beyond issues with pseudonymity, the ability to anonymously access information via Tor and VPNs must also be protected and preserved. There is a disturbing trend for automated abuse detection systems to harshly penalize shared IP address infrastructure of all kinds, leading to loss of access.

The Tor Project is working with Cloudflare on both cryptographic and engineering-based solutions to enable Tor users to more easily access websites. We invite interested representatives from other tech companies to help us refine and standardize these solutions, and ensure that these solutions will work for them, too.

6. Encrypt data in transit and at rest.

With recent policy changes in both the US and abroad, it is more important than ever to encrypt data in transit, so that it does not end up in the dragnet. This means more than just HTTPS. Even intra-datacenter communications should be protected by IPsec or VPN encryption.

As more of our data is encrypted in transit, requests for stored data will likely rise. Companies can still be compelled to decrypt data that is encrypted with keys that they control. The only way to keep user data truly safe is to provide ways for users to encrypt that data with keys that only those users control.
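
A minimal sketch of that last idea, using the Python cryptography package: the encryption key is derived from a passphrase only the user knows, so the operator stores nothing it could be compelled to decrypt. The KDF parameters below are illustrative.

```python
import base64, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_passphrase(passphrase: str, salt: bytes) -> bytes:
    # Derive a symmetric key from a secret that only the user knows.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

salt = os.urandom(16)  # not secret; stored alongside the ciphertext
key = key_from_passphrase("correct horse battery staple", salt)
ciphertext = Fernet(key).encrypt(b"data the operator never sees in the clear")

# The operator stores only (salt, ciphertext); recovery needs the passphrase.
same_key = key_from_passphrase("correct horse battery staple", salt)
assert Fernet(same_key).decrypt(ciphertext).startswith(b"data")
```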

7. Invest in cryptographic R&D to replace non-cryptographic systems.

A common argument against cryptographic solutions for privacy is that the loss of either features, usability, ad targeting, or analytics is in opposition to the business case for the product in question. We believe that this is because the funding for cryptography has not been focused on these needs. In the United States, much of the current cryptographic R&D funding comes from the US military. As Phillip Rogaway pointed out in Part 4 of his landmark paper, The Moral Character of Cryptographic Work, this has created a misalignment between what gets funded versus what is needed in the private sector to keep users' personal data safe in a usable way.

It would be a wise investment for companies that handle large amounts of user data to fund research into potential replacement systems that are cryptographically privacy preserving. It may be the case that a company can be both skillful and lucky enough to retain detailed records and avoid a data catastrophe for several years, but we do not believe it is possible to keep a perfect record forever.

The following are some areas that we think should be explored more thoroughly, in some cases with further research, and in other cases with engineering resources for actual implementations: Searchable Encryption, Anonymous Credentials, Private Ad Delivery, Private Location Queries, Private Location Sharing, and private information retrieval (PIR) in general.
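
To make one of these concrete, here is a toy "blind index", a very simple relative of searchable encryption: the server can test search terms for equality without ever learning them. The key and records below are illustrative.

```python
import hashlib, hmac

INDEX_KEY = b"held by the client, never the server"  # illustrative secret

def blind_token(term: str) -> str:
    # The server can compare tokens for equality without learning the words.
    return hmac.new(INDEX_KEY, term.lower().encode(), hashlib.sha256).hexdigest()

# The client uploads tokens alongside its encrypted records...
server_index = {blind_token("chemotherapy"): "encrypted-record-17"}

# ...and later searches by sending only a token, never the term itself.
print(server_index.get(blind_token("chemotherapy")))  # -> encrypted-record-17
```

Even this toy shows the trade-off: equality matching still leaks access patterns and is vulnerable to frequency analysis, which is exactly why the research areas above deserve sustained funding.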

8. Eliminate single points of security failure, even against coercion.

Well-designed cryptographic systems are extremely hard to compromise directly. Typically, the adversary looks for a way around the cryptography, either by exploiting other code on the system or by coercing one of the parties into divulging key material or decrypted data. These attacks naturally target the weakest point of the system: the single point of security failure where the fewest systems need to be compromised and the fewest people will notice. The proper engineering response is to ensure that multiple layers of security must be broken before the system fails, and that any security failure is visible and apparent to the largest possible number of people.

Sandboxing, modularization, vulnerability surface reduction, and least privilege are already established as best practices for improving software security. They also eliminate single points of failure. In combination, they force the adversary to compromise multiple hardened components before the system fails. Compiler hardening is another way to eliminate single points of failure in code bases. Even with memory unsafe languages, it is still possible for the compiler to add additional security layers. We believe that compiler hardening could use more attention from companies who contribute to projects like GCC and clang/llvm, so that the entire industry can benefit. In today's world, we all rely on the security of each other's software, sometimes indirectly, in order to do our work.

When security does fail, we want incidents to be publicly visible. Distributed systems and multi-party/multi-key authentication mechanisms are common ways to ensure this visibility. The Tor consensus protocol is a good example of a system that was deliberately designed such that multiple people must be simultaneously compromised or coerced before security will fail. Reproducible builds are another example of this design pattern. While these types of practices are useful when used internally in an organization, this type of design is more effective when it crosses organizational boundaries - so that multiple organizations need to be compromised to break the security of a system - and most effective when it also crosses cultural boundaries and legal jurisdictions.
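
A toy sketch of the multi-key pattern (Python with the cryptography package; the five signers and threshold of three are illustrative): a release is accepted only when a quorum of independent signers vouches for it, so no single compromised or coerced key holder is enough.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

THRESHOLD = 3  # of 5 signers; one coerced key holder cannot ship an update

def count_valid(artifact: bytes, pub_sig_pairs) -> int:
    valid = 0
    for pub, sig in pub_sig_pairs:
        if sig is None:
            continue  # this authority did not sign
        try:
            pub.verify(sig, artifact)  # raises InvalidSignature on failure
            valid += 1
        except InvalidSignature:
            pass
    return valid

signers = [Ed25519PrivateKey.generate() for _ in range(5)]
artifact = b"sha256 digest of the release tarball"
sigs = [s.sign(artifact) for s in signers[:3]] + [None, None]  # 3 of 5 signed
pairs = list(zip([s.public_key() for s in signers], sigs))
assert count_valid(artifact, pairs) >= THRESHOLD  # quorum met: accept
```

The design is most effective when the signers span organizations and jurisdictions, as noted above, so that no single subpoena or break-in can reach a quorum.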

We are particularly troubled by the trend towards the use of App Stores to distribute security software and security updates. When each user is personally identifiable to the software update system, that system becomes a perfect vector for backdoors. Globally visible audit logs like Google's General Transparency are one possible solution to this problem. Additionally, the anonymous credentials mentioned in Principle 7 provide a way to authenticate the ability to download an app without revealing the identity of the user, which would make it harder to target specific users with malicious updates.
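
As a toy illustration of the append-only property such audit logs depend on (production designs like Certificate Transparency use Merkle trees; this simple hash chain is only a sketch, and all values are illustrative):

```python
import hashlib

class ChainedLog:
    # Each entry commits to the previous head, so rewriting any past entry
    # changes every later head and is detectable by anyone who saved an
    # earlier checkpoint.
    def __init__(self):
        self.head = b"\x00" * 32
        self.entries = []

    def append(self, release_metadata: bytes) -> bytes:
        self.head = hashlib.sha256(self.head + release_metadata).digest()
        self.entries.append((release_metadata, self.head))
        return self.head

log = ChainedLog()
log.append(b"app v1.0 sha256=...")
checkpoint = log.append(b"app v1.1 sha256=...")  # auditors record this value
```

An update system that publishes every release in such a log cannot quietly serve a special backdoored build to one targeted user without the divergence becoming publicly detectable.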

9. Favor open source and enable user freedom.

The Four Software Freedoms are the ability to use, study, share, and improve software.

Open source software that provides these freedoms has many advantages when operating in a hostile environment. It is easier for experts to certify and verify security properties of the software; subtle backdoors are easier to find; and users are free to modify the software to remove any undesired operation.

The most widely accepted argument against backdoors is that they cannot be deployed safely: once discovered, a backdoor compromises the security of the system for everyone. A secondary argument is that backdoors can be avoided by switching to alternative systems, or by removing them. Both of these arguments are stronger for open source than for closed source, precisely because of the Four Freedoms.

10. Practice transparency: share best practices, stand for ethics, and report abuse.

Unfortunately, not all software is open source. Even for proprietary software, the mechanisms by which we design our systems in order to prevent harm and abuse should be shared publicly in as much detail as possible, so that best practices can be reviewed and adopted more widely. For example, Apple is doing great work adopting cryptography for many of its products, but without specifications for how they are using techniques like differential privacy or iMessage encryption, it is hard to know what protections they are actually providing, if any.

Still, even when the details of their work are not public, the best engineers deeply believe that protecting their users is an ethical obligation, to the point of being prepared to publicly resign from their jobs rather than cause harm.

But before we get to the point of resignation, it is important that we do our best to design systems that make abuse either impossible or evident, to share those designs, and to responsibly report any instances of abuse. When abuse happens, we should inform affected organizations and protect the information of individual users who were at risk, while making sure that users and the general public hear about the issue with little delay.

Please Join Us

Ideally, this post will spark a conversation about best practices for data management and the deployment of cryptography in companies around the world.

We hope to use this conversation to generate a list of specific best practices that the industry is already undertaking, as well as to provide a set of specific recommendations based on these principles for companies with which we're most familiar, and whose products will have the greatest impact on users.

If you have specific suggestions, or would like to highlight the work of companies who are already implementing these principles, please mention them in the comments. If your company is already taking actions that are consistent with these principles, either write about that publicly, or contact me directly. We're interested in highlighting positive examples of specific best practices as well as instances where we can all improve, so that we all can work towards user safety and autonomy.

We would like to thank everyone at the Tor Project and the many members of the surrounding privacy and Internet freedom communities who provided review, editorial guidance, and suggestions for this post.

Comments

Please note that the comment area below has been archived.

December 16, 2016


This is really excellent. Thanks, Mike. It's worth keeping in mind for all technology projects (even research code). (-Rachel)

Assuming you are who I think you are, as a user I am not authorized to invite you to blog here on the dangers to Tor users posed by stylometric de-anonymization attacks, but I sure wish someone who is, would. And if you are given the opportunity, I hope you will not decline to address the troubling issue of problematic funding. (DARPA apparently told you they wanted to protect anonymous speech, but they told an Army Times reporter that they want to reveal all the anons. So their story changed dramatically, and that's worrisome).

December 16, 2016


This is an excellent writeup. I like that there are links about what exactly to do, and specific political reasons why to do it. The only puzzle piece missing is how the company or organization effecting these steps is to directly benefit (if it is to benefit at all). Those of us who frequent this blog clearly understand the necessity of all of these principles for the greater good, but without a selling point, I think it'll be difficult getting any of these ideas through upper management.

I don't really know where to start on that. Maybe some data indicating that customers are willing to choose privacy over convenience? That the damages of being compromised or leaked exceed the costs of securing the data in the first place?

"Why does our company need this?" That's the big question.

A somewhat dated talk, "The State of Incident Response" by Bruce Schneier, explores this question from the consumer's and the company's perspectives, in terms of security. Suffice it to say, said state is far from where we in the security community would like it to be.

> I don't really know where to start on that. Maybe some data indicating that customers are willing to choose privacy over convenience?

There might be some data suggesting that some people want privacy but are failing to obtain it, but by the very nature of data collection, there shouldn't be very much on people who want privacy and succeed in obtaining it.

Similarly, it is possible that one reason there is not a vibrant consumer privacy industry in the USA is that people who desire privacy are reluctant to do the privacy-compromising things which companies demand in order for customers to provide feedback. (I believe that in her book Dragnet Nation, Julia Angwin makes the same point in other words.)

December 17, 2016


Stealth, stealth, and a little more stealth would be nice, thanks. Can't see it, can't catch it -_-

One of the interesting points mentioned in connection with the many many *many* news stories on the intrusion by unknown (possibly Russian state-sponsored) actors into the DNC/DCCC is that steganography played a role in hiding the attack.

So why should the bad guys be the only ones who can use almost impossible to detect steganography? Why isn't there any steganography application in the Debian repository which works well with innocuous images which do *not* contain metadata? E.g. unique and pretty computer generated fractal images, unique computer generated images of the charming Simpson family, pretty pictures of cross word puzzles, semi-unique chess board state of play images, semi-unique sudoku puzzles, all kinds of innocuous looking things could be used to help ordinary people communicate using encrypted steganography. Why are there not more and better steganographic utilities suitable for encrypted laptops or phones?

The speculative but plausible answer: NSA is preventing them from becoming available to the general public.

Maybe Tor Project should also have a Ministry of Games, which tries to invent addictive but harmless games which billions of ordinary people will want to play, which involve an exchange of images which are suitable for steganography.

Yes, our enemies try very hard to persuade us that "NSA loves GPG, because it helps them know what to attack", implying that we would be foolish to use strong cryptography. But of course we would be insane not to use strong cryptography. The solution is not less cryptography, but more and better cryptography, coupled with more and better stealth modes for sharing encrypted data and passing encrypted messages.
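
For concreteness, here is a toy least-significant-bit embedder along those lines (Python with Pillow). To be clear, this is trivially detectable by statistical tests; it illustrates the idea, nothing more:

```python
from PIL import Image  # Pillow

def embed(cover_path: str, message: bytes, out_path: str) -> None:
    img = Image.open(cover_path).convert("RGB")
    # NUL-terminate the message, then spell it out one bit at a time.
    bits = [(byte >> i) & 1 for byte in message + b"\x00" for i in range(8)]
    pixels = list(img.getdata())
    assert len(bits) <= len(pixels), "message too long for this cover image"
    stego = [((r & ~1) | bits[i], g, b) if i < len(bits) else (r, g, b)
             for i, (r, g, b) in enumerate(pixels)]
    img.putdata(stego)
    img.save(out_path, "PNG")  # lossless format, or the low bits are destroyed

# e.g. embed("fractal.png", b"meet at noon", "fractal-with-note.png")
```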

> pretty pictures of cross word puzzles, semi-unique chess board state of play images, semi-unique sudoku puzzles, all kinds of innocuous looking things could be used to help ordinary people communicate using encrypted steganography.

Here is an interesting historical precedent:

http://www.chabad.org/holidays/chanukah/article_cdo/aid/597253/jewish/D…

>> Jewish children resorted to learning Torah in outlying areas and forests. Even this plan was not foolproof, for the enemy had many patrols. The children therefore brought along small tops that they would quickly pull out and play with after secreting away their texts, so that they could pretend to be merely playing games.

December 17, 2016


One of the most important documents in history that explains why "they" are obsessed with your data: Silent Weapons for Quiet Wars

December 17, 2016


Special attention to our tech friends in the UK, who just passed one of the most authoritarian laws against digital rights.

December 17, 2016


VERY interesting.

BUT there are states/countries that don't respect their own laws, e.g. the U.S.A. but also others.

December 17, 2016


One I thought of that wasn't mentioned is: don't outsource infrastructure unless you keep control of the data or crypto keys. This one is huge right now. Everyone is outsourcing everything from email to analytics.

Not to name names, but Google is perhaps the biggest enabler of this behavior. With Enterprise GMail (or whatever term they use for it), emails you send from your private account to someone else's seemingly private @example.com address might actually be destined for Google's back end. You probably won't realize it's happening, and there's usually nothing you can do to change it. If you're just sending email from your own account, you could create a temporary email address over Tor and use PGP to encrypt the content (good luck), but if we're talking about your company or school requiring you to sign into their outsourced enterprise email account, you're SOL. It could be your doctor's office, university, employer, insurance company, or anyone else doing this, and is it really possible to shop around for one that has their own private infrastructure?

Google Analytics is probably the most widespread example of this. Millions, maybe billions, who knows how many sites are constantly sending data about their visitors to Google, simply so they can improve their marketing tactics. I don't know if e.g. Piwik lacks features, or is simply too difficult for most companies to use, but the fact is that private analytics products do not even make a dent in Google's market share. Unlike the GMail example, using Tor can help protect you from this, but it is still a major problem for the web at large.

Why? It is much cheaper and more convenient for organizations to outsource to Google, Amazon, Facebook, PayPal, Salesforce, even blockchain.info, than to run their own infrastructure. There's an agency out there willing to take on any kind of data you can think of. And if it weren't for the privacy aspect, I would say leaving it to the experts is a very sound business practice. As long as privacy doesn't sell, the future is bleak.

What happens in the browser is just the tip of the iceberg. I was focusing more on the general practice of companies using or providing services they don't have control over, essentially handing out their customers' data to third parties.

You make some important points here!

One minor quibble: I don't think anyone knows whether privacy will sell, because it seems that no retail outlet in the US is attempting to put privacy-enhancing products on the shelves or encouraging customers to pay with cash (or the more anonymous varieties of electronic currency).

We desperately need a privacy industry which sells mass-produced consumer items which concerned citizens can use to check up on what their smart phone is doing, to really truly improve the security of their home router, to counter the growing abuse of extraneous electronic emanations to spy on video screens and laser printers, to move the world towards strictly text-based email which nukes all those dangerous and possibly executable "attachments", etc., etc.

How do we get said privacy industry? Tor Project's media team can help by pushing reporters to consider asking aloud why consumer items appear to be insecure by design.

We know one reason why: NSA tries to "shape" the consumer environment in order to better spy on us all. But this may not be a complete explanation, since one would think that with all those eager would-be entrepreneurs out there, plenty of people would love to found a rapidly growing business which provides, for example, RF frequency power spectrum analyzers with DF capabilities tied to GPS mapping (independent of Google Maps, which is used by NSA so citizens should avoid it).

December 17, 2016


Re: preparing crisis response, pray tell, WHERE is the Tor Project's "canary"? Or is the fact that there is none a sign that the Tor Project has already been compromised, maybe from the very first day? Just asking...

The short answer is that I don't think security canaries do anything.

We've told you that there's no backdoor, and that we aren't going to put one in, no matter what.

We don't need to use any legal tricks (that probably don't actually work in a legal sense) to try to hint to you when we've secretly changed our plan. We won't change it.

No backdoors, ever, no matter what.

https://www.torproject.org/docs/faq#Backdoor

December 17, 2016

In reply to arma


In other words, Tor Project's canary is Tor Project itself.

"Tor Project's canary is Tor Project itself" is a sophism. The project - any project - can't be it's own canary, since the person or entity whoever takes ownership would also control the canary, ipso facto.

Frankly, it shouldn't be so difficult for the brilliant lot of employees and collaboraTors to think up and devise some clever, robust canary mechanism that would be all but infrangible (meaning unbreakable). If the team is so busy with other important tasks - like ordering and distributing t-shirts, or decreeing codes of correct gender interaction - you could trust a summer intern with the canary research.

Can you describe a scenario where that argument actually makes a practical difference? For all intents and purposes, Tor Project shutting down its own website or ceasing to release updates is functionally the same as failing to update its canary if it had one. Unless there is some specific legal distinction between the two.

If you're talking about an adversary taking control of Tor Project's infrastructure and issuing an NSL, then PGP signatures already in use would prevent a new version from being released, resulting in effectively the same scenario as aforementioned.

If you're talking about an adversary successfully compelling Tor Project to release a backdoored version, said adversary would also compel a canary update.

I'm not trying to get into a metaphysical debate about it or anything, just wondering if I'm missing something.

"I'm not trying to get into a metaphysical debate about it or anything, just wondering if I'm missing something" : well, maybe. What I put under the moniker "canary" would be a means for the team to advertise the clean status (as in : non compromised) of the Tor project, the sites, servers, software distro... in an unambiguous manner and that an adversary seizing the project could not easily fake even with the help of part of the project's team and management; I'm not clear myself /how/ it should be done.It could be, as I hinted above, a research subject of its own. Probably the "canary" needs to be hosted in a privacy friendly country and away from the main hosting servers, distributed or replicated on several hosts, on clearnet as well as onions, cryptographically signed with multiple keys of long standing project members... I think something could be devised - by people cleverer than I - unless, maybe it's too late at this stage to "bootstrap" confidence. A research subject IMHO.

I think the argument here is not whether canaries provide an immutable protection against subversion by our enemies, but whether they might provide a little extra assurance.

My own view is that in the foreseeable future, we are not likely to find *anything* which is truly unbreakable either technologically, politically, or legally, and that from this it follows that we should adopt "defense in depth". Come to think of it, "defense in depth" is another candidate for an Eleventh Principle to be added to those set out by Mike Perry above.

December 18, 2016

In reply to arma


> No backdoors, ever, no matter what.

I can never hear that often enough, so thanks.

But you may be overlooking the fact that "backdoor" might mean different things to different people. Even more insidious, the private definition you have in mind might change under pressure from a government or other source.

Off the top of my head, here is a random selection of phenomena which might be considered backdoors:

o quietly inserting malicious code into software published by the Project,

o surreptitiously changing parameters of key utilities such as a PRNG in order to greatly reduce its strength, enabling NSA to spy easily and undetectably,

o looking the other way when NSA/FBI/whomever injects malicious code into torproject pages or software in transit to users,

o making it too hard for users to anonymously report such experiences in order to alert developers,

o allowing (knowingly or should-have-knowingly or unknowingly) NSA or other agencies or militaries of the US or other governments to significantly influence critical design decisions in TP software, or political strategizing by TP executives, or simply "getting too close" to NSA/DARPA in some way,

o censoring comments which come close to uncovering some secret USIC "backdooring" project, in order to avoid embarrassing a government which to date has provided most of the Project's funding (directly or through its corporate proxies such as SRI),

o censoring comments which try to broaden the term "backdoor" to include well-established de-anonymization techniques which so far Tor Project does not even attempt to circumvent, such as stylometry,

o hiring (knowingly or should-have-knowingly or unknowingly) a CIA agent as a Tor Project developer,

o deleting from this blog an anonymous comment pointing out the existence of any of the above, with links to publicly available evidence supporting the claims.

Many projects (including Tor) related to security and privacy are or have been financed by the US government/Senate/Congress using third-party fronts (CIA and others, see: https://pando.com/2015/03/01/internet-privacy-funded-by-spooks-a-brief-… ).
Why? From the projects' point of view, they need money to live and to get things done... many would prefer just donations, but only people really involved know how difficult it is to get donations from real people not related to the government in any way.
Some parts of the government see this kind of technology as necessary for their own uses and political objectives around the world (including giving people in some countries an incentive to see the rest of the world, in the hope they will do what is needed to free themselves... and that is good for USA business... more markets to sell and trade things).

Speaking of BBG (Broadcasting Board of Governors), a US "soft propaganda" entity (neither a government agency, nor a private company, nor an independent NGO) which in the past provided much of Tor Project's funding:

"Liberal" US media organizations are expressing concern about major changes in how BBC operates which were snuck into the last-minute version of the NDAA (the military spending authorization by the US Federal Congress). The critics fear that the changes will make it easy for the next administration to turn BBG into a "hard propaganda" agency which targets the US population itself with disinformation and misinformation, similar to state-sponsored news agencies in countries such as China or Russia.

Some commentators in this blog have remarked that in decreasing order of subtlety, China > USA > Russia in terms of how the government influences or controls national media in "friendly" or "adversary" nations. That is, Russia doesn't try very hard to hide its often enormous and easily spotted "effects" operations, but China is much more successful because instead of targeting the alt-right fringe with "fake news", they have been targeting the American business community with carefully "slanted news". The USA is somewhere in between. RU has been brilliantly successful in the US only because the established parties both suffer from an almost uniquely American phenomenon: a strange kind of inalienable political arrogance which is deaf to all wakeup calls and blind to all warning signs. Both parties were warned--- significantly, not by FBI but by ordinary citizens--- that Podesta was being hacked to death and that the RNC's turn might come next, but disregarded the warnings.

For example, in the allegations currently dominating headlines that state-sponsored RU hackers decided the outcome of the recent US elections, no US media organizations (other than Truthdig and Truth-Out) appear to have yet pointed out that over the past 60 years, CIA has intervened (mostly unsuccessfully) in a good proportion of elections around the world, e.g. by making huge contributions to their favored candidates, by bribing election officials, or simply by assassinating opposition politicians or terrorizing their supporters. (See the carefully researched and very readable book by Tim Weiner, Legacy of Ashes, for details.) So to some extent, while it is not established whether or not Putin shares in the responsibility for the election of Donald J. Trump, it is certain that CIA is very, very blameworthy.

To be sure, other nations have engaged in similar covert actions; the US media has not hesitated to (truthfully) report on RU intervention in elections held in former Soviet states. But this doesn't alter the fact that a nation which long espoused pro-democracy principles should never have been involved in covert operations in the first place. If only because a little foresight would have revealed that what goes around, comes around.

> Many ... things).
In the USA, as in many other countries, donations from foundations, associations, clubs, governments, investment funds, etc. are a duty, an ethical obligation, and a tax benefit for clients and investors... the reality is far from what you are thinking: it is pragmatism and the legal way.
It has nothing to do with the USA as nation/NSA or business, and it is not a necessity.
The USA is involved (in fact, all of NATO and the UN are) because of a historic wish/responsibility, but... they can suddenly change their minds and adopt another policy, or destroy what they built... it is also the reason why Tor, like any other project, needs your support and your help - donations are welcome.

December 18, 2016


It is not only the US that doesn't respect its laws. The BRD/FRG doesn't respect the law either. The BRD/FRG has never been Germany. It is difficult to respect laws in this country and not get harassed. Criminals may get caught, but our treacherous courts will set them free. Doesn't matter if it's murder or simple thievery. But murder doesn't lapse. Not even our borders are respected and protected anymore. Germany is getting flooded with millions of war-criminals and our politicians and parts of the population are the facilitators. Women, men and children alike are harassed and murdered in the hundreds. The police are frustrated because politicians don't help them. They even command them not to investigate.

Even technology is turned against us. Our health card is a dictatorial piece of junk, but corrupt doctors and their employees are junk as well. Righteous patients get abused like garbage. The rest are only brainless sheep to be used at will.

But these problems are about to be solved. The traitors to our nation will get what they deserve, rest assured. And the war-criminals that have invaded our land will be eradicated.

There will be true justice one day. We will get our vengeance.

December 18, 2016


The "Tor at the Heart" series is so awesome! I am always impressed by the thoughtfulness of Mike's posts, and this post may be his most important so far. I hope all readers will try to donate to the Project!

I strongly agree with the stated principles, and am very glad to see both Shari and Mike clearly explaining in recent posts that technology does not exist in a vacuum, and that Tor Project must think in terms of political strategy as well as technical countermeasures. Both are essential, and as more pro-democracy institutions recognize this, we will be in a much stronger position to resist the current global rush toward autocracy.

I am very happy that Shari has made funding diversity a priority for the Tor Project, and the entire "Tor at the Heart" series is a big step toward that goal, one which I hope will prove a huge success, but I don't see any explicit reference in the ten principles to the critical need to avoid being too easily manipulated by the USG, or any other government, or indeed by any corporation. (Microsoft springs to mind--- it seems they are quietly getting more and more involved with the Linux kernel and this does not bode well for the future security of Linux in general.)

A more difficult point which also is not mentioned: on the one hand, Tor Project should think long and hard before accepting any "advice" or funding offered from USIC or DARPA or their contractors/allies, or similar agencies/companies from other nations, and should be particularly careful to avoid hiring anyone who might later turn out to be a mole for American or Russian or any other intelligence services. NIST may be able to offer some advice on how to formulate a policy which essentially says "think very hard before you even allow USIC onto the premises, much less invite them to sit on a committee, much less allow them to influence political or technical decisions".

December 18, 2016


> 7. Invest in cryptographic R&D to replace non-cryptographic systems.

Critically important, but maybe a little too narrow.

In the past, IMO, Tor Project suffered from too narrowly defining the problem of providing anonymity, for the convenience of a small NGO with a tiny budget, but to the great detriment of the many endangered Tor users around the world.

I see some signs this unfortunate attitude is changing; in particular, Mike Perry has been in the vanguard in challenging the notion that Tor cannot hope to provide any meaningful protection under any circumstances against such vastly powerful adversaries as NSA. Further reason for cautious optimism comes from the Snowden leaks.

Currently, one major (?) de-anonymization threat facing Tor users which the Tor Project is not (yet) publicly talking about is stylometric attacks.

Might I suggest that this, too, would be a suitable topic for future research? It seems to me that some graduate student somewhere could certainly modify the gedit spellcheck (for example) to autosuggest, upon request, more common synonyms for words used in a draft post, in order to minimize entropy loss due to vocabulary statistics. Not much harder, perhaps: minimizing entropy loss due to punctuation style statistics. Much more challenging: minimizing the additional entropy loss resulting from grammar/syntax statistics.
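
For concreteness, a toy sketch of the vocabulary-flagging half of that idea (Python; the "wordfreq.txt" frequency list is hypothetical):

```python
def load_common_words(path: str = "wordfreq.txt", top_n: int = 5000) -> set:
    # One word per line, most common first; the file name is hypothetical.
    with open(path) as f:
        return {line.strip().lower() for line, _ in zip(f, range(top_n))}

def flag_rare_words(draft: str, common: set) -> list:
    # Words outside the common set leak the most authorial style.
    words = (w.strip(".,;:!?\"'()").lower() for w in draft.split())
    return sorted({w for w in words if w and w not in common})
```

A real tool would go on to suggest common synonyms for each flagged word, e.g. via a thesaurus, and would also have to normalize punctuation habits.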

December 18, 2016


> 6. Encrypt data in transit and at rest.

Suggest adding the desirability of "future proofing".

Just as Forward Secrecy can thwart decryption of TLS if an adversary gains knowledge of a unique website TLS key used to encrypt all transactions, so too should Tor developers bear in mind that ideally we want to protect users not just for a year, but for their entire lifetimes.

We know from the Snowden leaks that NSA is attempting to "archive" *all* TLS encrypted datastreams between any two parties, anytime, anywhere in the world, hoping to decrypt these as soon as their cryptanalysis is sufficiently advanced, or their brute-forcing technology sufficiently brutal. More precisely, sources say they do not currently hope to decrypt *all* TLS encrypted datastreams at some later date, just the ones they guess might hold remaining "interest" at that later date. It is often assumed that NSA is the only agency with such vast (and enormously dangerous) ambitions, but I caution that this is not likely to be true.

But in any case, ideally we want Tor data streams which occur next month to be "quantum safe" several generations hence.

In addition to the obvious technological rationale for "future proofing", there is an important political rationale: China has announced they are phasing in the use of Big Data methods to continually compute and recompute "citizenship scores" which will limit the access of dissidents to jobs, courts, education, travel opportunity, housing, bottled oxygen, and food.

The USA is doing the same thing, in secret, under cover of FBI/NCTC CVE (Countering Violent Extremism) programs which are also effectively preparing to implement continually recomputed citizenship scores for use by FBI's "Shared Responsibility Committees". These scores will draw on Big Data repositories holding information gathered over the entire lifetime of each individual, but will not be bound by the ancient legal principle that one cannot be charged in the future for doing something today which is not defined or even commonly viewed as criminal activity today. Thus we must expect that millions of hapless CN/US citizens will find themselves penalized in the future for blog posts (for example) they make today, which would not be viewed *today* as objectionable to their government, but which twenty years hence might be viewed with extreme prejudice.

In countries which are not regarded as sufficiently resourceful to implement real time citizenship scoring based on Big Data repositories, regime changes can have similar effects. If you blog today your support for the current regime, you might land in terrible trouble if that regime is replaced next year by another with very different ideology.

December 18, 2016


> 10. Practice transparency: share best practices, stand for ethics, and report abuse.

Also: one of the things I like most about the Tor Messenger team is that in answering comments to their occasional posts announcing a new version (here's hoping they emerge from beta Real Soon Now!), they have cogently explained some design decisions. This is very helpful even to readers who lack the background needed to understand the fine points because it provides evidence they are thinking hard about these decisions.

Another candidate for an eleventh Principle might be: Persuade people to actually use our software. In particular, if the Tor Media team can persuade some influential reporters to regularly use TM, and to use it at places like Calyx where potential sources can try to reach them, there is some hope that eventually other reporters (smart, young, ambitious?) will follow suit. That would be enormously beneficial for journalism (and ultimately, for enhancing public trust in mass media).

December 18, 2016


2. Prepare policy commentary for quick response to crisis.

Yes, and more generally, prepare contingency plans for anticipated emergencies, such as

o a coordinated global raid which attempts to seize every Directory Authority,

o an unexpectedly effective cyberattack which disables a large fraction of Tor nodes, owing to some undiscovered but patchable vulnerability,

o seizure/theft of Tor Project funds held in US or overseas banks,

o seizure/theft of other critical technical assets, or the death/incapacitation of a key employee,

o discovery of another mole inside the Tor Project,

o doxing, burglaries, beatings, assassinations or other terror tactics employed by unknown actors against employees and volunteers in the US and/or overseas,

o raid of Tor Project facilities in Cambridge, with seizure of all computers and files, or less dramatically but having the same effect, finding one morning that you have simply been locked out of your offices with no explanation (the Russian government often uses this tactic against human rights groups, and there is good reason to think the new US administration will adopt such Putinesque tactics, after trial use targeting entities such as Tor Project to which the US mass media is already quite hostile),

o unexpected overnight action by the US Congress resulting in a badly written law which effectively declares Tor Project to be an "illegal" organization under US law.

Clearly you cannot prepare for everything, but there are a few things which are sufficiently likely that you should have technical, legal, and political playbooks ready to hand should you encounter these predictable emergencies.

December 20, 2016


I'd like to urge Tor Project to encourage anyone with marketable coding/STEM skills to boycott the Surveillance State, or even the Trump administration. If the funding drive is a huge success, as I hope, perhaps Tor Project can take out some carefully crafted, well-placed advertisements, like this:

https://www.eff.org/press/releases/eff-ad-wired-tech-community-must-sec…
EFF Ad in Wired: Tech Community Must Secure Networks Against Trump Administration
President-Elect Threatens Free and Open Internet
20 Dec 2016

> San Francisco - In a full-page advertisement in Wired magazine, the Electronic Frontier Foundation (EFF) has a warning for the technology community: “Your threat model just changed.” EFF’s open letter calls on technologists to secure computer networks against overreaches by the upcoming Trump administration and to protect a free, secure, and open Internet. The January issue of Wired with EFF’s open letter on page 63 hit newsstands today. “Our goal is to rally everyone who makes digital tools and services to this important cause: protect your technology and networks from censorship and government surveillance,” said EFF Activism Director Rainey Reitman. “The Internet was created to connect and empower people around the world. We cannot let it be conscripted into a tool of oppression. But if we are going to protect the Internet, we need a lot of help. Wired has been looking to the technological future for over two decades, and its readers have the skills we need.”

December 20, 2016


Given the pivotal role of the US on the internet, this should interest everyone everywhere:

https://www.eff.org/deeplinks/2016/12/trump-and-his-advisors-surveillan…
Trump and His Advisors on Surveillance, Encryption, Cybersecurity
Kate Tummarello
19 Dec 2016

> Where will the incoming Trump administration come down on issues like surveillance, encryption, and cybersecurity? While it is impossible to know the future, we have collected everything we could find about the stated positions of Trump and those likely to be in his administration on these crucial digital privacy issues. If you are aware of any additional statements that we have not included, please email kate@eff.org with a link to your source material, and we will consider it for inclusion.

December 20, 2016


Some Tor users have urged Tor Project to help create a massive drive encouraging people with marketable coding/engineering skills to boycott the Surveillance State, or even the Trump administration.

Lest anyone doubt that this can be effective, after an employee revolt, IBM has apparently reversed course by joining a number of the other largest techcos in vowing that it will not help DHS create the Trump-proposed Muslim registry:

https://theintercept.com/2016/12/19/ibm-employees-launch-petition-prote…
IBM Employees Launch Petition Protesting Cooperation with Donald Trump
Sam Biddle
19 Dec 2016

> IBM employees are taking a public stand following a personal pitch to Donald Trump from CEO Ginni Rometty and the company’s initial refusal to rule out participating in the creation of a national Muslim registry.

"Some Tor users", like you, two comments up? :)

I really like a guideline I learned from Gunner, the nice person who volunteers to facilitate parts of the Tor meetings: "in a group of n people, speak at most 1/n of the time."

That advice is especially important in an anonymous blog comment context, where it's not clear how many people there are, so it's easy to accidentally look like a Sybil attacker just because you have a lot to say.

In summary, if at most 1/n of the words in the comments section were yours, I think we would all be more cheerful publishing your comments. :) Thanks!

December 21, 2016

In reply to arma


Keep your shirt on, Roger!

1. I often see long and thoughtful comments in this blog which I certainly did not write, so the "Tellerheimer phenomenon" you cite may not be so serious as you think,

2. I am one (and certainly not the only one!) of the users who for years urged Tor Project to diversify funding, something which I feel very strongly is essential for the Project's survival, and also to think in terms of political strategies as well as technical decisions, and I have been delighted that Shari has adopted these core principles; the "Tor at the Heart" series is a major step toward cementing them as guiding all TP activities in the years ahead, so it is natural that I should wish to see the funding drive succeed,

3. I certainly hope you are attempting to encourage more readers to offer more frequent, well-expressed, and thoughtful comments, rather than to discourage any one reader from doing those things.

December 27, 2016


>3. Only keep the user data that you currently need.

In some nation-states companies are REQUIRED BY LAW TO COLLECT AND STORE THE DATA TO PROVIDE LAW ENFORCEMENT WITH THEM FOR ANALYSIS WITH MACHINE LEARNING.

Too true, alas, and the US is to some extent one of them.

But these are political battles which can be fought. In the meantime, we can hope that some providers will relocate to nations which do not (yet) have any such mandatory government requirements.