Tor Messenger 0.1.0b5 is released

We are pleased to announce another public beta release of Tor Messenger. This release features important security updates to libotr, and addresses a number of stability and usability issues. All users are highly encouraged to upgrade.

The initial public release was a success in that it garnered a lot of useful feedback. We tried to respond to all of your concerns in the comments of that blog post, and we also collected the most common questions into a FAQ.

OTR over Twitter DMs

Tor Messenger now supports OTR conversations over Twitter DMs (direct messages). Simply configure your Twitter account with Tor Messenger and add the Twitter account you want to message as a contact. Any direct message you send to that contact will be sent over OTR, provided that both contacts are running Tor Messenger (or another client that supports OTR over Twitter DMs).

Facebook support dropped

Facebook officially deprecated its XMPP gateway long ago, and the gateway no longer appears to work. We received multiple reports from users about this issue and decided that it was best to remove Facebook support from Tor Messenger.

We hear that an implementation of the new MQTT-based protocol is in the works, so we hope to restore this functionality in the future.

Before upgrading, back up your OTR keys

Before upgrading to the new release, you will need to back up your OTR keys or simply generate new ones. Please see the following steps to back them up.
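As a rough sketch of what such a backup amounts to (this is not the official procedure; the profile path and key file names below are assumptions based on typical libotr clients):

```python
import shutil
from pathlib import Path

# Assumption: Tor Messenger keeps its OTR state inside the browser
# profile directory. Adjust PROFILE to match your installation.
PROFILE = Path.home() / ".tor-messenger" / "profile.default"
BACKUP = Path.home() / "otr-key-backup"

BACKUP.mkdir(exist_ok=True)
for name in ("otr.private_key", "otr.fingerprints"):
    src = PROFILE / name
    if src.exists():
        shutil.copy2(src, BACKUP / name)  # copy2 preserves timestamps
        print(f"backed up {name}")
    else:
        print(f"{name} not found; check the profile path")
```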

In the future, we plan to port Tor Browser's updater patches (#14388) so that keeping Tor Messenger up to date is seamless and automatic. We also plan to add a UI to make importing OTR keys and accounts from Pidgin, and other clients, as easy as possible (#16526).

The secure updater will likely be a part of the next release of Tor Messenger.

Downloads

Please note that Tor Messenger is still in beta. The purpose of this release is to help test the application and provide feedback. At-risk users should not depend on it for their privacy and safety.

Linux (32-bit)

Linux (64-bit)

Windows

OS X (Mac)

sha256sums.txt
sha256sums.txt.asc

The sha256sums.txt file containing hashes of the bundles is signed with the key 0x6887935AB297B391 (fingerprint: 3A0B 3D84 3708 9613 6B84 5E82 6887 935A B297 B391).
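If you prefer to script the hash check (after first verifying sha256sums.txt itself against sha256sums.txt.asc with gpg), a minimal sketch might look like this; the bundle file name is whatever you downloaded:

```python
import hashlib
import sys
from pathlib import Path

# Usage: python check.py sha256sums.txt <downloaded-bundle>
sums_file, bundle = sys.argv[1], sys.argv[2]

# Hash the downloaded bundle in 1 MiB chunks.
h = hashlib.sha256()
with open(bundle, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

# Each line of sha256sums.txt is "<hex digest>  <file name>".
expected = {}
with open(sums_file) as f:
    for line in f:
        if line.strip():
            digest, name = line.split()
            expected[Path(name).name] = digest

ok = expected.get(Path(bundle).name) == h.hexdigest()
print("OK" if ok else "MISMATCH")
sys.exit(0 if ok else 1)
```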

Changelog

Here is the complete changelog since v0.1.0b4:

Tor Messenger 0.1.0b5 -- March 09, 2016

  • All Platforms
    • Bug 13795: Remove SPI root certificate because Debian no longer ships it
    • Bug 18094: Remove references to torbutton from start-tor-messenger script
    • Bug 18235: Disable Facebook as they no longer support XMPP
    • Bug 17494: Better error reporting for failed outgoing messages
    • Bug 17749: Show version information in the "About" window
    • Bug 13312: Add support for OTR over Twitter DMs
    • Bump libotr to 4.1.1
  • Mac
    • Bug 17896: Add Edit menu to the conversation window on OS X
  • Windows
    • ctypes-otr
      • GH 65: Support Unicode paths on Windows

Anonymous, March 09, 2016

A suggestion: a grading system for the level of trust of all nodes, with the best nodes selectable in the chain automatically or manually.

Anonymous, March 09, 2016

:C

Couldn't Tor Messenger be made to use a JavaScript OTR implementation instead of libotr?

Yes, it could, and that's something we've considered (and are still considering). We went with libotr because it's correct, constant-time, and audited, but these memory-safety issues are indeed troubling.

Even though I don't yet really understand all of the design decisions you explain briefly, I think one of the most promising things about TM is the fact that you are trying to explain design decisions in response to queries from technically able users. That's very important because it helps less able users to trust that you are thinking hard about all these things. I think most less able users (like me) are at least capable of understanding that everything is a tradeoff, and that all we can reasonably ask is that you make careful choices and continually reexamine them. (I hope TM will be able to remain sufficiently nimble to undo a bad decision in the event of some possible future revelation at Black Hat etc. which changes expert opinion on the relative hazard of the various vulnerabilities affecting TM's dependencies.)

I'd love to see an Ars article by one of their intrepid journalists which tries to explain the design decisions you have defended in more detail. Even better if Tor Project can get a sizable fraction of journalists who write about tech to help us all beta test TM.

We can certainly do better and one of the things we want to get across is who can see what in a typical Tor Messenger conversation. Can your ISP or the server see what you are talking about (no)? Can the server see who you are talking to (yes)? We need to get this information across to users, in a simple "yes" "no" "maybe" tabular format. We are tracking this in https://trac.torproject.org/projects/tor/ticket/17528.

Anonymous, March 09, 2016

What's changed regarding the IRC client? It doesn't work anymore. Now I get 'Error: Peer's Certificate issuer is not recognized', which I never saw before. I then add a permanent security exception, but it still doesn't connect; it just says 'Lost connection', and after a while I get 'Error: Peer's Certificate issuer is not recognized' again. I'm using the irc.oftc.net network.

As the changelog says, "Bug 13795: Remove SPI root certificate because Debian no longer ships it". The OFTC certificates are signed by the SPI certificate, which we used to include because Debian also included it, but Debian no longer does (see https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=796208). When we discussed adding certificates to our builds (see https://lists.torproject.org/pipermail/tbb-dev/2014-November/000181.html), we decided we were comfortable shipping the SPI certificate because Debian packaged it as part of the ca-certificates package. Since Debian no longer considers it "safe", we decided to remove it as well.
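To see the effect concretely, here is a minimal sketch (plain Python, connecting directly rather than over Tor) that attempts a TLS handshake with OFTC against the system trust store and against a local copy of the SPI root; the spi-cacert.pem path is hypothetical:

```python
import socket
import ssl

def try_handshake(cafile=None):
    # cafile=None uses the system trust store, which (like Debian's
    # ca-certificates today) no longer includes the SPI root.
    ctx = ssl.create_default_context(cafile=cafile)
    try:
        with socket.create_connection(("irc.oftc.net", 6697), timeout=10) as s:
            with ctx.wrap_socket(s, server_hostname="irc.oftc.net") as tls:
                return "verified; issuer: {}".format(tls.getpeercert()["issuer"])
    except ssl.SSLCertVerificationError as e:
        return "verification failed: {}".format(e.verify_message)

print(try_handshake())                  # likely fails: issuer not recognized
print(try_handshake("spi-cacert.pem"))  # hypothetical local copy of SPI root
```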

OFTC has been known to block Tor but it seems like the situation has improved recently. So the "lost connection" messages are most likely OFTC blocking connections from your specific exit.

Does Tor Messenger actually store the security exceptions? I've clicked "permanently store this exception" I don't know how many times, but they keep coming back.

My subjective feeling is that it is much harder to connect to an IRC network with this version than with the previous one.

Yes, it should (and does in my limited testing) store the exception. Can you clarify which platform you're on?

This change would only affect OFTC (well, and other networks with SPI-issued certs, which I assume are few), but I accept your premise that this was a regression in usability.

What platform are you on? I've noticed that that can be a little finicky as well (worth filing a bug), but it should eventually work.

Tor Messenger 0.1.0b5 and 0.1.0b4 both crash when I try to open a menu (File, Tools, or Help) on Kubuntu 14.04.4 LTS. This happens before creating an account, so the accounts configuration window is closed by me. I think it is a bug related to the GTK application theme in the KDE environment.

1. This release was to patch the known libotr vulnerability.
2. https://trac.torproject.org/projects/tor/ticket/17023

Something to consider for upcoming releases: Tor users can now be fingerprinted by their unique mouse movements. See https://www.rt.com/viral/335112-tor-mouse-movements-fingerprint/

1. Lol, I can't. I disabled JS. :)
2. I still can. I have not disabled CSS. :(

Is http://zgfgvob256pffy62.onion/ the official Tor website? Can I download from there?
Does it support Windows 7/10 64-bit?

That looks like a mirror of the website:
https://lists.torproject.org/pipermail/tor-mirrors/2014-August/000673.h…
https://www.torproject.org/getinvolved/mirrors.html.en

Wherever you download from, please verify the hash and signature.
https://www.torproject.org/docs/verifying-signatures.html.en

It should support that platform / architecture, yes.

>That looks like a mirror of the website
But it isn't listed by the Tor Project as a mirror.

>Wherever you download from, please verify the hash and signature.
Most people won't do this.

Just download from https://torproject.org or choose from any of the Tor project's mirrors at https://www.torproject.org/getinvolved/mirrors.html.en

ITsn is listed on the mirrors page; there just doesn't happen to be a column for onion addresses.

But you're right to say that if you can access the .onion, you should be able to just visit torproject.org and get it from there.

I think TM is one of the most promising projects Tor Project is doing right now, so very glad to see the next edition coming out!

@ Sukhbir: are you using Shamir Secret Sharing Scheme (SSSS) or similar to take modest precautions against "rubber hose cryptanalysis" by US, RU, CN, etc. governments?

The fear is that an agency like the FBI could try to compel you, on pain of indefinite imprisonment without trial or something like that, to misuse your authentic signing key to sign FBI-made or -bought malware disguised as a genuine TM tarball. Further, if the legal coercion came in the form of an NSL (National Security Letter), you would be forbidden from ever revealing to anyone, even your lawyer, that you had been served with a demand and a gag order. If that happens, I strongly encourage you to immediately disobey the gag order and tell everyone, because no one has ever had the courage to do that, and I strongly suspect that courts might surprise everyone by ruling that the NSL gag orders violate the US Constitution. (IANAL, so that guess represents a political judgment, not a legal one!)

SSSS (look in the Debian software repositories) provides some protection against that by distributing shares to trusted people in distinct jurisdictions (e.g. people in the US, Norway, RU, CN, and Brazil must all sign, and we hope those countries would not all cooperate in rubber hose cryptanalysis to obtain a backdoor in TM, targeted or otherwise). And there is Cothority, which offers massively scaled SSSS-like "witnessing":

http://arstechnica.com/security/2016/03/cothority-to-apple-lets-make-se…
Cothority to Apple: Let’s make secret backdoors impossible
Decentralized cosigning could make it tough for government to gain access.
J.M. Porup (UK)
10 Mar 2016

> Cothority decentralises the signing process, and scales to thousands of cosigners. For instance, in order to authenticate a software update, Apple might require 51 percent of 8,000 cosigners distributed around the world.

See https://blog.torproject.org/blog/deterministic-builds-part-one-cyberwar…

Already, the Linux builds of Tor Messenger are reproducible.

Anonymous, March 11, 2016 (in reply to arlo)

@ arlo:

Apparently I was insufficiently clear and I regret that.

The deterministic builds project is much needed and long overdue, but as I understand it, this project addresses a very different kind of threat: "scenarios where malware sneaks into a development dependency through an exploit in combination with code injection, and makes its way into the build process of software that is critical to the function of the world economy" (according to Mike Perry's blog post which you cited in the link).

That is a very serious (and all too plausible) potential threat, but it is quite different from the very serious (and too plausible) potential threat I was talking about, which is a kind of "rubber hose cryptanalysis" in which a key developer (or a set of same) is forced by some government (or coalition of governments) to misuse their *genuine* cryptographic signing key by signing a version of their product which has been "backdoored" by some government malware-as-a-service contractor. Wary users who check the gpg signature before installing would still be fooled because the "bad" version has been signed with the *genuine* signing key.

See this explainer:

http://arstechnica.com/security/2016/02/most-software-already-has-a-gol…
Most software already has a “golden key” backdoor: the system update
Software updates are just another term for cryptographic single-points-of-failure.
Leif Ryge
27 Feb 2016

> Q: What does almost every piece of software with an update mechanism, including every popular operating system, have in common?
>
> A: Secure golden keys, cryptographic single-points-of-failure which can be used to enable total system compromise via targeted malicious software updates.
>
> I'll define those terms: By "malicious software update," I mean that someone tricks your computer into installing an inauthentic version of some software which causes your computer to do things you don't want it to do. A "targeted malicious software update" means that only the attacker's intended target(s) will receive the update, which greatly decreases the likelihood of anyone ever noticing it. To perform a targeted malicious software update, an attacker needs two things: (1) to be in a position to supply the update and (2) to be able to convince the victim's existing software that the malicious update is authentic. Finally, by "total system compromise" I mean that the attacker obtains all of the authority held by the program they're impersonating an update to. In the case of an operating system, this means that the attacker can subvert any application on that computer and obtain any encryption keys or other unencrypted data that the application has access to.
>
> A backdoored encryption system which allows attackers to decrypt arbitrary data that their targets have encrypted is a significantly different kind of capability than a backdoor which allows attackers to run arbitrary software on their targets' computers. I think many informed people discussing The Washington Post's request for a "secure golden key" assumed they were talking about the former type of backdoor, though it isn't clear to me if the editorial's authors actually understand the difference.
>
> From an attacker perspective, each capability has some advantages. The former allows for passively-collected encrypted communications and other surreptitiously obtained encrypted data to be decrypted. The latter can only be used when the necessary conditions exist for an active attack to be executed, but when those conditions exist it allows for much more than mere access to already-obtained-but-encrypted data. Any data on the device can be exfiltrated, including encryption keys and new data which can be collected from attached microphones, cameras, or other peripherals.
>
> Many software projects have only begun attempting to verify the authenticity of their updates in recent years. But even among projects that have been trying to do it for decades, most still have single points of devastating failure.
>
> In some systems there are a number of keys where if any one of them is compromised such an attack becomes possible. In other cases it might be that signatures from two or even three keys are necessary, but when those keys are all controlled by the same company (or perhaps even the same person) the system still has single points of failure.
>
> This problem exists in almost every update system in wide use today. Even my favorite operating system, Debian, has this problem. If you use Debian or a Debian derivative like Ubuntu, you can see how many single points of failure you have in your update authenticity mechanism with this command:
> ...

As Ryge explains, something like Shamir's Secret Sharing Scheme (ssss in the Debian repository) can help combat the threat I am talking about, by distributing "shares" of a single secret, such as a gpg signing key, to a number of people in various countries, some minimal number of whom can combine their shares to recreate the secret in order to sign software. But when your enemy is the USG, even with SSSS it would not be easy to guarantee that the USG could not force developers to misuse their shares, because it would be very difficult to name a set of countries in which key developers already live that cannot be pressured by the FBI into cooperating in "rubber hose cryptanalysis" of the kind described by Ryge.
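For intuition, here is a toy sketch of Shamir's scheme over a prime field (illustration only; a real deployment should use a vetted tool such as the ssss package mentioned above):

```python
import random

# Toy Shamir secret sharing over GF(P). Never use hand-rolled crypto
# (or Python's random module) for real signing keys.
P = 2**127 - 1  # a Mersenne prime, large enough for a small integer secret

def split(secret, n, k):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def combine(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse (Fermat), as P is prime
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, n=5, k=3)
assert combine(shares[:3]) == 123456789   # any 3 shares suffice
assert combine(shares[2:5]) == 123456789  # a different 3 also work
```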

The potential threat I am talking about, that Ryge is talking about, affects all Linux distributions which use a package manager, all Open Source software projects like Tails which offer a gpg signed ISO image of a specialized Linux distribution for at-risk users, all Open Source software projects such as Tor Project which offer cryptographically signed tarballs such as TBB or TM tarballs for download by ordinary citizens who need privacy/anonymity/security. And all users of a smart phone who download cryptographically signed software upgrades from the phone's manufacturer. All users of a router who download cryptographically signed firmware upgrades. Pretty much everyone who uses an electronic device which accepts cryptographically signed upgrades.

This threat is not hypothetical, as FBI's demands to Apple show.

The potential threat Mike Perry is talking about in the cited blog post is also rather general, and I certainly regard it as non-hypothetical and very serious. But it is a different threat which calls for a different response.

No, you were clear. I just think that reproducible builds solve the same problem. Ideally, before Sukhbir signs each build, auditors will reproduce it and compare hashes. If they don't match, they'll blow the whistle.

> Ideally, before Sukhbir signs each build, auditors will reproduce it and compare hashes. If they don't match, they'll blow the whistle.

I think I understand your point, and agree as far as it goes, but it seems that keeping the signing key distributed (e.g. using something like SSS) is still a very good idea, and I hope you will consider it.

Are the auditors geographically distributed? Not all in countries where Comey can force the local government to apply "rubber hose cryptanalysis" by forcing an auditor to abuse their privileges by falsely stating that their build matched the expected hash?

A tricky issue with SSSS and allied schemes: you want to set the number of required shares sufficiently low so that if the USG organizes a global roundup of all Tor devs they can arrest, there will still be sufficiently many survivors to carry on and sign the next bundle of TM or TB, or the next ISO image of Tails. But you want to set it high enough (ideally with some geographical constraints also) so that FBI will find it difficult to pressure enough governments to round up all the devs at once.

Hope this is clear; it's hard to describe in words late in the day.

We will consider it, thanks.

I think it's difficult to use SSSS to keep the signing key distributed. At some point someone has to reassemble the signing key on some computer to sign the release, and then we have a single person with a copy of the signing key, which is no longer distributed.

What we can do, however, is have the build signed by multiple people. We do it for Tor Browser. We should do it for Tor Messenger too, but first we need to make the builds for all platforms reproducible (currently only the Linux builds are, so the sha256sums.txt file, which contains hashes for all platforms, wouldn't match).

But the signatures of the official builders are not everything. We also hope that other people, whom we don't know, will verify our builds anonymously and say something if something seems wrong.
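As a sketch of what such independent verification amounts to: several builders each publish the hash they computed for the same bundle, a digest is trusted only with a quorum behind it, and any dissent is a red flag (builder names and digests below are made up):

```python
from collections import Counter

def agreed_hash(reports, quorum):
    """reports maps builder name -> sha256 hex digest each one computed
    for the same bundle. Accept a digest only if at least `quorum`
    independent builders reproduced it, and name anyone who disagrees."""
    counts = Counter(reports.values())
    digest, votes = counts.most_common(1)[0]
    if votes < quorum:
        raise RuntimeError(f"only {votes} builders agree; need {quorum}")
    dissenters = sorted(b for b, d in reports.items() if d != digest)
    if dissenters:
        print("warning: dissenting builders:", ", ".join(dissenters))
    return digest

# Illustrative placeholder digests, not real ones:
print(agreed_hash(
    {"official": "aa" * 32, "auditor1": "aa" * 32,
     "auditor2": "aa" * 32, "auditor3": "bb" * 32},
    quorum=3,
))
```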

> I think it's difficult to use SSSS to keep the signing key distributed. At some point someone has to reassemble the signing key on some computer to sign the release, and then we have a single person with a copy of the signing key, which is no longer distributed.

Point taken. Perhaps SSSS used *in isolation* is really more suitable for a Board of Trustees which wishes to be able to reconstruct a copy of the master key to the company networks in case their cybersecurity chief "breaks bad" (cf. the experience of San Francisco some years ago).

> What we can do, however, is have the build signed by multiple people. We do it for Tor Browser. We should do it for Tor Messenger too, but first we need to make the builds for all platforms reproducible (currently only the Linux builds are, so the sha256sums.txt file, which contains hashes for all platforms, wouldn't match).
>
> But the signatures of the official builders are not everything. We also hope that other people, whom we don't know, will verify our builds anonymously and say something if something seems wrong.

I think the Open Source community must consider every available technical tool, legal stratagem, and geographical distribution to come up with a workable solution to the threat of "rubber hose" breakage of the authentication and data-integrity functions of cryptography.

It worries me, boklm, that no one at Tor Project has yet clearly stated that they even understand the nature of this threat. If that's true, there is no way you can prepare defenses against it. That's terrible, because I believe you may have only weeks or months before you (and some or all of your users) fall victim to coerced cooperation by:

  • key Tor Project people, in abusing cryptographic signing keys to sign maliciously modified TP products (TB, TM, tor client, tor server software) provided by the bad guys,

  • key certificate authority people, in abusing signing keys by signing bad certs produced by the bad guys.

Here is that link again:

http://arstechnica.co.uk/security/2016/02/most-software-already-has-a-g…
Most software already has a “golden key” backdoor—it’s called auto update
Software updates are just another term for cryptographic single-points-of-failure.
Leif Ryge
27 Feb 2016

We understand the nature of this threat, and this is why we do reproducible builds and encourage people to verify them.

There is also some plan to verify the updates through the Tor consensus:
https://trac.torproject.org/projects/tor/ticket/10393

> No, you were clear. I just think that reproducible builds solve the same problem.

I don't think that's true. If I'm wrong you need to explain this:

> Ideally, before Sukhbir signs each build, auditors will reproduce it and compare hashes. If they don't match, they'll blow the whistle.

Here is how Leif Ryge (I think some Tor people know him) described the threat:

http://arstechnica.co.uk/security/2016/02/most-software-already-has-a-g…
Most software already has a “golden key” backdoor—it’s called auto update
Software updates are just another term for cryptographic single-points-of-failure.
Leif Ryge
27 Feb 2016

> ...
>
> [Software developers] probably thought they would be able keep the keys safe against realistic attacks, and they didn't consider the possibility that their governments would actually compel them to use their keys to sign malicious updates.
>
> Fortunately, there is some good news. The FBI is presently demonstrating that this was never a good assumption, which finally means that the people who have been saying for a long time that we need to remove these single points of failure can't be dismissed as unreasonably paranoid anymore.

Here is one possible scenario:

FBI serves Tor Project with an NSL (which is automatically accompanied by a gag order forbidding recipients from telling anyone about the secret order, on pain of very lengthy prison terms) or some court order (accompanied with a "delayed notification" which can be renewed every 90 days simply by FBI asking for a renewal) ordering developers to use their authentic signing key(s) to sign a version of latest TM or TB tarball which has been provided by FBI, and which contains hidden malware. Facing indefinite imprisonment or worse--- perhaps they are sitting in jail and told they will remain there unless they agree to cooperate--- the developers sign the maliciously modified tarball. That's "rubber hose" breakage of the *authentication* function of gpg.

With NSA help, the FBI then manages to divert the https connections of targeted Tor users attempting to download the next edition of the TB or TM tarball to their own site, where they are served the trojaned version of the tarball. Perhaps NSA has compelled a Certificate Authority to use *their* genuine signing key to validate FBI's fake torproject.org PEM certificate, thus fooling users into thinking they are connected to the genuine website.

Since the detached signature verifies against the genuine signing key, the targeted users accept the "update" as genuine.

Meanwhile, the "verifiers" you describe get the genuine tarball like almost everyone else, and thus fail to detect the targeted attack, and because of the gag order they have no way of knowing you were forced to sign malware with your authentic key.

Another scenario: FBI not only compels you to use your authentic key to sign maliciously modified tarballs, they secretly seize the domain torproject.org and serve the bad tarballs to *everyone* (no bad PEM needed).

Maybe I misunderstood what you said, but I am not sure the verifiers would catch the deception in this case either, because FBI has complete control over your persons and your domain, so they can manipulate all your communications with the verifiers.

Could you add an option to enable logs again? This might be slightly against the concept, but I can't trust the other people I chat with to have logging disabled anyway.

One of the properties that OTR gives you is that any transcript your contact produces is deniable. See https://otr.cypherpunks.ca

But if you're really adamant about it, you can always open the Config Editor and flip the logging preferences. See https://trac.torproject.org/projects/tor/ticket/10939 for which ones.

I tried this a couple of months ago; it ostensibly re-enables logging, but it's functionally unusable. It overwrites the logfile every time the app launches, the contents are JSON rather than human-readable text, and it's buried deep inside the application directory.

Logging is very important to average users. That I can't get a meaningful log is keeping me from recommending it to others I'd like to have using it. (I'm not the OP here, either.)

You can open the context menu (right click) of a conversation and select "Show Logs" to view them in a human-readable way in the application itself, right? (Not sure if you're aware of that.) Also, there are many tools to process JSON to a more readable format. You can also change the `purple.logging.format` pref from `json` to `txt`, if you really want, but I'm not sure how much longer that'll work. I believe the structured logging is by design.

But, more substantively, there're a number of bugs in the way before we can safely reenable logging,
https://bugzilla.mozilla.org/show_bug.cgi?id=1175706
https://bugzilla.mozilla.org/show_bug.cgi?id=1175374
https://github.com/arlolra/ctypes-otr/issues/49

Thank you for the feedback though. It's helping me gauge the issue.

The menu from an item in the contacts list does bring up Show Logs, thank you! And the viewer respects my font settings, which is terrific. It would of course be good to have it available from a regular menu and keyboard shortcut so users can find it in their preferred way. (You can probably guess that hunting for stuff behind mouse buttons isn't mine.)

Correction: The files aren't being overwritten anymore. But the rest is the same.

On OS X, the logs are in

  /Applications/Tor Messenger.app/Contents/TorMessenger/Data/Browser/[myprofile].default/logs/jabber/myuser@domain/otheruser@domain

sample:

  {"date":"2016-03-11T17:19:54.000Z","who":"myuser@domain/Instantbird","text":"what i typed","flags":["outgoing"],"alias":"myuser"}
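Given that format, a minimal sketch of a converter to readable text (assuming one JSON object per line, as in the sample above):

```python
import json
import sys

# Usage: python readlog.py <path-to-log-file>
with open(sys.argv[1], encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        msg = json.loads(line)
        direction = "->" if "outgoing" in msg.get("flags", []) else "<-"
        who = msg.get("alias") or msg.get("who", "?")
        print(f'{msg.get("date", "")} {direction} {who}: {msg.get("text", "")}')
```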

> I can't trust the other people I chat with to have logging disabled anyway.

I have sometimes told people in chats that I was not logging, and I really was not. You probably meant "I cannot be certain that other people I chat with have disabled logging" and of course I agree with that.

How is it, if I have defined multiple IRC accounts: do they share the same Tor circuit when connecting to the network?

I'm not sure exactly what's being asked, but it sounds related to https://trac.torproject.org/projects/tor/ticket/14382

I have a few accounts on the same XMPP server that I don't want associated with one another (the server is the adversary). Could I get different Tor circuits for them in this case? Right now I could use a different bundle for each account to specify a different Tor SOCKS host and port, but that is quite inconvenient. I would like to see a way to specify different

  • SOCKS host
  • SOCKS port
  • OTR key
  • OTR fingerprints

for different accounts in the same tor messenger.

Then, the second question: Tools → Preferences → Content → "Send these fonts and colors as part of my messages when possible". What is this? Is it safe to allow? Why should I send any info about my fonts and colors to somebody? The interface should be standard for everybody.

P.S. Thanks for the project! I waited for a long time any officially supported messenger for Tor. Now I'm happy the project exists and evolves.

Ok, yes, that's along the lines of what Yawning is describing in that stream isolation bug (#14382) and sounds like something we'll eventually want to implement.
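For background, tor already isolates streams that present different SOCKS credentials (the IsolateSOCKSAuth behavior is on by default for the SocksPort), so a client can land accounts on separate circuits just by varying the SOCKS username. A minimal sketch using the third-party PySocks library, with the usual default SOCKS host/port rather than Tor Messenger's actual configuration:

```python
import socks  # third-party PySocks package

def connection_for(account):
    # Streams that present different SOCKS5 username/password pairs are
    # put on different circuits by tor's IsolateSOCKSAuth (on by default).
    s = socks.socksocket()
    s.set_proxy(socks.SOCKS5, "127.0.0.1", 9050,
                username=account, password="x")
    s.connect(("example.com", 80))
    return s

a = connection_for("xmpp-account-1")  # one circuit
b = connection_for("xmpp-account-2")  # a different circuit
```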

I opened a bug for your second point. Good catch! That should be disabled. See https://trac.torproject.org/projects/tor/ticket/18533

Glad you're enjoying the app.

Thanks for opening the ticket and adding links to my comments! I clarified it there a bit better: https://trac.torproject.org/projects/tor/ticket/14382#comment:7

Thanks for making Tor Messenger :) I've been running it 24/7 for a few weeks now and it works perfectly.

To defeat browser profiling, inject JavaScript that adds or subtracts random micro mouse-moves to or from each real move, to block "mousemove fingerprinting"; also allow the browser to lie about its plugins, with a randomized plugin list per site, etc.

Someone else already cited the link, but here are the details of the cited attack:

http://jcarlosnorte.com/security/2016/03/06/advanced-tor-browser-finger…
Advanced Tor Browser Fingerprinting
6 March 2016

> The ability to privately communicate through the internet is very important for dissidents living under authoritary regimes, activists and basically everyone concerned about internet privacy.
>
> While the TOR network itself provides a good level of privacy, making difficult or even practically impossible to discover the real I.P. address of the tor users, this is by no means enough to protect users privacy on the web. When browsing the web, your identity can be discovered using browser exploits, cookies, browser history, browser plugins, etc.

Tor Project is tracking this issue, so we hope the next release will field countermeasures.