arma's blog

Tor 0.2.5.12 and 0.2.6.7 are released

Tor 0.2.5.12 and 0.2.6.7 fix two security issues that could be used by an attacker to crash hidden services, or crash clients visiting hidden services. Hidden services should upgrade as soon as possible; clients should upgrade whenever packages become available.

These releases also contain two simple improvements to make hidden services a bit less vulnerable to denial-of-service attacks.

We also made a Tor 0.2.4.27 release so that Debian stable can easily integrate these fixes.

The Tor Browser team is currently evaluating whether to put out a new Tor Browser stable release with these fixes, or wait until next week for their scheduled next stable release. (The bugs can introduce hassles for users, but we don't currently view them as introducing any threats to anonymity.)

Changes in version 0.2.5.12 - 2015-04-06

  • Major bugfixes (security, hidden service):
    • Fix an issue that would allow a malicious client to trigger an assertion failure and halt a hidden service. Fixes bug 15600; bugfix on 0.2.1.6-alpha. Reported by "disgleirio".
    • Fix a bug that could cause a client to crash with an assertion failure when parsing a malformed hidden service descriptor. Fixes bug 15601; bugfix on 0.2.1.5-alpha. Found by "DonnchaC".
  • Minor features (DoS-resistance, hidden service):
    • Introduction points no longer allow multiple INTRODUCE1 cells to arrive on the same circuit. This should make it more expensive for attackers to overwhelm hidden services with introductions. Resolves ticket 15515.

Changes in version 0.2.6.7 - 2015-04-06

  • Major bugfixes (security, hidden service):
    • Fix an issue that would allow a malicious client to trigger an assertion failure and halt a hidden service. Fixes bug 15600; bugfix on 0.2.1.6-alpha. Reported by "disgleirio".
    • Fix a bug that could cause a client to crash with an assertion failure when parsing a malformed hidden service descriptor. Fixes bug 15601; bugfix on 0.2.1.5-alpha. Found by "DonnchaC".
  • Minor features (DoS-resistance, hidden service):
    • Introduction points no longer allow multiple INTRODUCE1 cells to arrive on the same circuit. This should make it more expensive for attackers to overwhelm hidden services with introductions. Resolves ticket 15515.
    • Decrease the number of retry attempts that a hidden service performs when its rendezvous circuits fail. This reduces the computational cost of running a hidden service under heavy load. Resolves ticket 11447.

Possible upcoming attempts to disable the Tor network

The Tor Project has learned that there may be an attempt to incapacitate our network in the next few days through the seizure of specialized servers in the network called directory authorities. (Directory authorities help Tor clients learn the list of relays that make up the Tor network.) We are taking steps now to ensure the safety of our users, and our system is already built to be redundant so that users maintain anonymity even if the network is attacked. Tor remains safe to use.

We hope that this attack doesn't occur; Tor is used by many good people. If the network is affected, we will immediately inform users via this blog and our Twitter feed @TorProject, along with more information if we become aware of any related risks to Tor users.

The Tor network provides a safe haven from surveillance, censorship, and computer network exploitation for millions of people who live in repressive regimes, including human rights activists in countries such as Iran, Syria, and Russia. People use the Tor network every day to conduct their daily business without fear that their online activities and speech (Facebook posts, email, Twitter feeds) will be tracked and used against them later. Millions more also use the Tor network at their local internet cafe to stay safe for ordinary web browsing.

Tor is also used by banks, diplomatic officials, members of law enforcement, bloggers, and many others. Attempts to disable the Tor network would interfere with all of these users, not just ones disliked by the attacker.

Every person has the right to privacy. This right is a foundation of a democratic society. For example, if Members of the British Parliament or US Congress cannot share ideas and opinions free of government spying, then they cannot remain independent from other branches of government. If journalists are unable to keep their sources confidential, then the ability of the press to check the power of the government is compromised. If human rights workers can't report evidence of possible crimes against humanity, it is impossible for other bodies to examine this evidence and to react. In the service of justice, we believe that the answer is to open up communication lines for everyone, securely and anonymously.

The Tor network provides online anonymity and privacy that allow freedom for everyone. Like freedom of speech, online privacy is a right for all.

[Update Monday Dec 22: So far all is quiet on the directory authority front, and no news is good news.]
[Update Sunday Dec 28: Still quiet. This is good.]

Solidarity against online harassment

One of our colleagues has been the target of a sustained campaign of harassment for the past several months. We have decided to publish this statement to publicly declare our support for her, for every member of our organization, and for every member of our community who experiences this harassment. She is not alone and her experience has catalyzed us to action. This statement is a start.

The Tor Project works to create ways to bypass censorship and ensure anonymity on the Internet. Our software is used by journalists, human rights defenders, members of law enforcement, diplomatic officials, and many others. We do high-profile work, and over the past years, many of us have been the targets of online harassment. The current incidents come at a time when suspicion, slander, and threats are endemic to the online world. They create an environment where the malicious feel safe and the misguided feel justified in striking out online with a thousand blows. Under such attacks, many people have suffered — especially women who speak up online. Women who work on Tor are targeted, degraded, minimized and endure serious, frightening threats.

This is the status quo for a large part of the internet. We will not accept it.

We work on anonymity technology because we believe in empowering people. This empowerment is the beginning and a means, not the end of the discussion. Each person who has power to speak freely on the net also has the power to hurt and harm. Merely because one is free to say a thing does not mean that it should be tolerated or considered reasonable. Our commitment to building and promoting strong anonymity technology is absolute. We have decided that it is not enough for us to work to protect the world from snoops and censors; we must also stand up to protect one another from harassment.

It's true that we ourselves are far from perfect. Some of us have written thoughtless things about members of our own community, have judged prematurely, or conflated an idea we hated with the person holding it. Therefore, in categorically condemning the urge to harass, we mean categorically: we will neither tolerate it in others, nor will we accept it among ourselves. We are dedicated to both protecting our employees and colleagues from violence, and trying to foster more positive and mindful behavior online ourselves.

Further, we will no longer hold back out of fear or uncertainty from an opportunity to defend a member of our community online. We write tools to provide online freedom but we don't endorse online or offline abuse. Similarly, in the offline world, we support freedom of speech but we oppose the abuse and harassment of women and others. We know that online harassment is one small piece of the larger struggle that women, people of color, and others face against sexism, racism, homophobia and other bigotry.

This declaration is not the last word, but a beginning: We will not tolerate harassment of our people. We are working within our community to devise ways to concretely support people who suffer from online harassment; this statement is part of that discussion. We hope it will contribute to the larger public conversation about online harassment and we encourage other organizations to sign on to it or write one of their own.

For questions about Tor, its work, its staff, its funding, or its world view, we encourage people to directly contact us (Media contact: Kate Krauss, press @ torproject.org). We also encourage people to join our community and to be a part of our discussions:
https://www.torproject.org/about/contact
https://www.torproject.org/docs/documentation#MailingLists



In solidarity against online harassment,

Roger Dingledine
Nick Mathewson
Kate Krauss
Wendy Seltzer
Caspar Bowden
Rabbi Rob Thomas
Karsten Loesing
Matthew Finkel
Griffin Boyce
Colin Childs
Georg Koppen
Tom Ritter
Erinn Clark
David Goulet
Nima Fatemi
Steven Murdoch
Linus Nordberg
Arthur Edelstein
Aaron Gibson
Anonymous Supporter
Matt Pagan
Philipp Winter
Sina Rabbani
Jacob Appelbaum
Karen Reilly
Meredith Hoban Dunn
Moritz Bartl
Mike Perry
Sukhbir Singh
Sebastian Hahn
Nicolas Vigier
Nathan Freitas
meejah
Leif Ryge
Runa Sandvik
Andrea Shepard
Isis Agora Lovecruft
Arlo Breault
Ásta Helgadóttir
Mark Smith
Bruce Leidl
Dave Ahmad
Micah Lee
Sherief Alaa
Virgil Griffith
Rachel Greenstadt
Andre Meister
Andy Isaacson
Gavin Andresen
Scott Herbert
Colin Mahns
John Schriner
David Stainton
Doug Eddy
Pepijn Le Heux
Priscilla Oppenheimer
Ian Goldberg
Rebecca MacKinnon
Nadia Heninger
Cory Svensson
Alison Macrina
Arturo Filastò
Collin Anderson
Andrew Jones
Eva Blum-Dumontet
Jan Bultmann
Murtaza Hussain
Duncan Bailey
Sarah Harrison
Tom van der Woerdt
Jeroen Massar
Brendan Eich
Joseph Lorenzo Hall
Jean Camp
Joanna Rutkowska
Daira Hopwood
William Gillis
Adrian Short
Bethany Horne
Andrea Forte
Hernán Foffani
Nadim Kobeissi
Jakub Dalek
Rafik Naccache
Nathalie Margi
Asheesh Laroia
Ali Mirjamali
Huong Nguyen
Meerim Ilyas
Timothy Yim
Mallory Knodel
Randy Bush
Zachary Weinberg
Claudio Guarnieri
Steven Zikopoulos
Michael Ceglar
Henry de Valence
Zachariah Gibbens
Jeremy M. Harmer
Ilias Bartolini
René Pfeiffer
Percy Wegmann
Tim Sammut
Neel Chauhan
Matthew Puckey
Taylor R Campbell
Klaus Layer
Colin Teberg
Jeremy Gillula
Will Scott
Tom Lowenthal
Rishab Nithyanand
Brinly Taylor
Craig Colman-Shepherd
A. Lizard
M. C. McGrath
Ross MacDonald
Esra'a Al Shafei
Gulnara Yunusova
Ben Laurie
Christian Vandrei
Tanja Lange
Markus Kitsinger
Harper Reed
Mark Giannullo
Alyssa Rowan
Daniel Gall
Kathryn Cramer
Camilo Galdos AkA Dedalo
Ralf-Philipp Weinmann
Miod Vallat
Carlotta Negri
Frederic Jacobs
Susan Landau
Jan Weiher
Donald A. Byrd
Jesin A.
Thomas Blanchard
Matthijs Pontier
Rohan Nagel
Cyril Brulebois
Neal Rauhauser
Sonia Ballesteros Rey
Florian Schmitt
Abdoulaye Bah
Simone Basso
Charlie Smith
Steve Engledow
Michael Brennan
Jeffrey Landale
Sophie Toupin
Dana Lane Taylor
Nagy Gabor
Shaf Patel
Augusto Amaral
Robin Molnar
Jesús Cea Avión
praxis journal
Jens Stomber
Noam Roberts
Ken Arroyo Ohori
Brian Kroll
Shawn Newell
Rasmus Vuori
Alexandre Guédon
Seamus Tuohy
Virginia Lange
Nicolas Sera-Leyva
Jonah Silas Sheridan
Ross McElvenny
Aaron Zauner
Christophe Moille
Micah Sherr
Gabriel Rocha
Yael Grauer
Kenneth Freeman
Dennis Winter
justaguy
Lee Azzarello
Zaki Manian
Aaron Turner
Greg Slepak
Ethan Zuckerman
Pasq Gero
Pablo Suárez-Serrato
Kerry Rutherford
Andrés Delgado
Tommy Collison
Dan Luedders
Flávio Amieiro
Ulrike Reinhard
Melissa Anelli
Bryan Fordham
Nate Perkins
Jon Blanchard
Jonathan Proulx
Bunty Saini
Daniel Crowley
Matt Price
Charlie McConnell
Chuck Peters
Ejaz Ahmed
Laura Poitras
Benet Hitchcock
Dave Williams
Jane Avriette
Renata Avila
Sandra Ordonez
David Palma
Andre N Batista
Steve Bellovin
James Renken
Alyzande Renard
Patrick Logan
Rory Byrne
Holly Kilroy
Phillipa Gill
Mirimir
Leah Carey
Josh Steiner
Benjamin Mako Hill
Nick Feamster
Dominic Corriveau
Adrienne Porter Felt
str4d
Allen Gunn
Eric S Johnson
Hanno Wagner
Anders Hansen
Alexandra Stein
Tyler H. Meers
Shumon Huque
James Vasile
Andreas Kinne
Johannes Schilling
Niels ten Oever
David W. Deitch
Dan Wallach
Jon Penney
Starchy Grant
Damon McCoy
David Yip
Adam Fisk
Jon Callas
Aleecia M. McDonald
Marina Brown
Wolfgang Britzl
Chris Jones
Heiko Linke
David Van Horn
Larry Brandt
Matt Blaze
Radek Valasek
skruffy
Galou Gentil
Douglas Perkins
Jude Burger
Myriam Michel
Jillian York
Michalis Polychronakis
SilenceEngaged
Kostas Jakeliunas
Sebastiaan Provost
Sebastian Maryniak
Clytie Siddall
Claudio Agosti
Peter Laur
Maarten Eyskens
Tobias Pulls
Sacha van Geffen
Cory Doctorow
Tom Knoth
Fredrik Julie Andersson
Nighat Dad
Josh L Glenn
Vernon Tang
Jennifer Radloff
Domenico Lupinetti
Martijn Grooten
Rachel Haywire
eliaz
Christoph Maria Sommer
J Duncan
Michael Kennedy Brodhead
Mansour Moufid
Melissa Elliott
Mick Morgan
Brenno de Winter
George Scriban
Ryan Harris
Ricard S. Colorado
Julian Oliver
Sebastian "bastik" G.
Te Rangikaiwhiria Kemara
Koen Van Impe
Kevin Gallagher
Sven "DrMcCoy" Hesse
Pavel Schamberger
Phillip M. Pether
Joe P. Lee
Stephanie Hyland
Maya Ganesh
Greg Bonett
Amadou Lamine Badji
Vasil Kolev
Jérémie Zimmermann
Cally Gordon
Hakisho Nukama
Daniel C Howe
Douglas Stebila
Jennifer Rexford
Nayantara Mallesh
Valeria de Paiva
Tim Bulow
Meredith Whittaker
Max Hunter
Maja Lampe
Thomas Ristenpart
Lisa Wright
August Germar
Ronald Deibert
Harlan Lieberman-Berg
Alan L. Stewart
Alexander Muentz
Erin Benson
Carmela Troncoso
David Molnar
Holger Levsen
Peter Grombach
John McIntyre
Lisa Geelan
Antonius Kies
Jörg Kruse
Arnold Top
Vladimir G. Ivanovic
Ahmet A. Sabancı
Henriette Hofmeier
Ethan Heilman
Daniël Verhoeven
Alex Shepard
Max Maass
Ed Agro
Andrew Heist
Patrick McDonald
Lluís Sala
Laurelai Bailey
Ghost
José Manuel Cerqueira Esteves
Fabio Pietrosanti
Cobus Carstens
Harald Lampesberger
Douwe Schmidt
Sascha Meinrath
C. Waters
Bruce Schneier
George Danezis
Claudia Diaz
Kelley Misata
Denise Mangold
Owen Blacker
Zach Wick
Gustavo Gus
Alexander Dietrich
Frank Smyth
Dafne Sabanes Plou
Steve Giovannetti
Grit Hemmelrath
Masashi Crete-Nishihata
Michael Carbone
Amie Stepanovich
Kaustubh Srikanth
arlen
Enrique Piracés
Antoine Beaupré
Daniel Kahn Gillmor
Richard Johnson
Ashok Gupta
Alex Halderman
Brett Solomon
Raegan MacDonald
Joseph Steele
Marie Gutbub
Valeria Betancourt
Konstantin Müller
Emma Persky
Steve Wyshywaniuk
Tara Whalen
Joe Justen
Susan Kentner
Josh King
Juha Nurmi
John Saylor
Jurre van Bergen
Saedu Haiza
Anders Damsgaard
Sadia Afroz
Nat Meysenburg
x3j11
Julian Assange
Skyhighatrist
Dan Staples
Grady Johnson
Matthew Green
Cameron Williams
Roy Johnson
Laura S Potter-Brown
Meredith L. Patterson
Casey Dunham
Raymond Johansen
Kieran Thandi
Jason Gulledge
Matt Weeks
Khalil Sehnaoui
Brennan Novak
Casey Jones
Jesse Victors
Peter DeChristo
Nick Black
Štefan Gurský
Glenn Greenwald
hinterland3r
Russell Handorf
Lisa D Lowe
Harry Halpin
Cooper Quintin
Mark Burdett
Conrad Corpus
Steve Revilak
Nate Shiff
Annie Zaman
Matthew Miller (Fedora Project)
David Fetter
Gabriella Biella Coleman
Ryan Lackey
Peter Clemenko
Serge Egelman
David Robinson
Sasa Savic
James McWilliams
Arrigo Triulzi
Kevin Bowen
Kevin Carson
Sajeeb Bhowmick
Dominik Rehm
William J. Coldwell
Niall Madhoo
Christoph Mayer
Simone Fischer-Hübner
George W. Maschke
Jens Kubieziel
Dan Hanley
Robin Jacks
Zenaan Harkness
Pete Newell
Aaron Michael Johnson
Kitty Hundal
Sabine "Atari-Frosch" Engelhardt
Wilton Gorske
Lukas Lamla
Kat Hanna
Polly Powledge
Sven Guckes
Georgia Bullen
Vladan Joler
Eric Schaefer
Ly Ngoc Quan Ly
Martin Kepplinger
Freddy Martinez
David Haren
Simon Richter
Brighid Burns
Peter Holmelin
Davide Barbato
Neil McKay
Joss Wright
Troy Toman
Morana Miljanovic
Simson Garfinkel
Harry Hochheiser
Malte Dik
Tails project
nuocu
Kurt Weisman
BlacquePhalcon
Shaikh Rafia
Olivier Brewaeys
Sander Venema
James Murphy
Chris "The Paucie" Pauciello
Syrup-tan
Brad Parfitt
Jerry Whiting
Massachusetts Pirate Party
András Stribik
Alden Page
Juris Vetra
Zooko Wilcox-O'Hearn
Marcel de Groot
Ryan Henry
Joy Lowell
Guilhem Moulin
Werner Jacob
Tansingh S. Partiman
Bryce Alexander Lynch
Robert Guerra
John Tait
Sebastian Urbach
Atro Tossavainen
Alexei Czeskis
Greg Norcie
Greg Metcalfe
Benjamin Chrobot
Lorrie Faith Cranor
Jamie D. Thomas
EJ Infeld
Douglas Edwards
Cody Celine
Ty Bross
Matthew Garrett
Sam P.
Vidar Waagbø
Raoul Unger
Aleksandar Todorović
John Olinda
Graham Perkins
Casa Casanova
James Turnbull
Eric Hogue
Jacobo Nájera
Ben Adida


If you would like to be on this list of signers (please do — you don't have to be a part of Tor to sign on!), please reach us at tor-assistants @ torproject.org.

Traffic correlation using netflows

People are starting to ask us about a recent tech report from Sambuddho's group about how an attacker with access to many routers around the Internet could gather the netflow logs from these routers and match up Tor flows. It's great to see more research on traffic correlation attacks, especially on attacks that don't need to see the whole flow on each side. But it's also important to realize that traffic correlation attacks are not a new area.

This blog post aims to give you some background to get you up to speed on the topic.

First, you should read the first few paragraphs of the One cell is enough to break Tor's anonymity analysis:

First, remember the basics of how Tor provides anonymity. Tor clients route their traffic over several (usually three) relays, with the goal that no single relay gets to learn both where the user is (call her Alice) and what site she's reaching (call it Bob).

The Tor design doesn't try to protect against an attacker who can see or measure both traffic going into the Tor network and also traffic coming out of the Tor network. That's because if you can see both flows, some simple statistics let you decide whether they match up.

Because we aim to let people browse the web, we can't afford the extra overhead and hours of additional delay that are used in high-latency mix networks like Mixmaster or Mixminion to slow this attack. That's why Tor's security is all about trying to decrease the chances that an adversary will end up in the right positions to see the traffic flows.

The way we generally explain it is that Tor tries to protect against traffic analysis, where an attacker tries to learn whom to investigate, but Tor can't protect against traffic confirmation (also known as end-to-end correlation), where an attacker tries to confirm a hypothesis by monitoring the right locations in the network and then doing the math.

And the math is really effective. There are simple packet counting attacks (Passive Attack Analysis for Connection-Based Anonymity Systems) and moving window averages (Timing Attacks in Low-Latency Mix-Based Systems), but the more recent stuff is downright scary, like Steven Murdoch's PET 2007 paper about achieving high confidence in a correlation attack despite seeing only 1 in 2000 packets on each side (Sampled Traffic Analysis by Internet-Exchange-Level Adversaries).

Second, there's some further discussion about the efficacy of traffic correlation attacks at scale in the Improving Tor's anonymity by changing guard parameters analysis:

Tariq's paper makes two simplifying assumptions when calling an attack successful [...] 2) He assumes that the end-to-end correlation attack (matching up the incoming flow to the outgoing flow) is instantaneous and perfect. [...] The second one ("how successful is the correlation attack at scale?" or maybe better, "how do the false positives in the correlation attack compare to the false negatives?") remains an open research question.

Researchers generally agree that given a handful of traffic flows, it's easy to match them up. But what about the millions of traffic flows we have now? What levels of false positives (algorithm says "match!" when it's wrong) are acceptable to this attacker? Are there some simple, not too burdensome, tricks we can do to drive up the false positives rates, even if we all agree that those tricks wouldn't work in the "just looking at a handful of flows" case?

More precisely, it's possible that correlation attacks don't scale well because as the number of Tor clients grows, the chance that the exit stream actually came from a different Tor client (not the one you're watching) grows. So the confidence in your match needs to grow along with that or your false positive rate will explode. The people who say that correlation attacks don't scale use phrases like "say your correlation attack is 99.9% accurate" when arguing it. The folks who think it does scale use phrases like "I can easily make my correlation attack arbitrarily accurate." My hope is that the reality is somewhere in between — correlation attacks in the current Tor network can probably be made plenty accurate, but perhaps with some simple design changes we can improve the situation.

The discussion of false positives is key to this new paper too: Sambuddho's paper mentions a false positive rate of 6%. That sounds like it means if you see a traffic flow at one side of the Tor network, and you have a set of 100000 flows on the other side and you're trying to find the match, then 6000 of those flows will look like a match. It's easy to see how at scale, this "base rate fallacy" problem could make the attack effectively useless.

And that high false positive rate is not at all surprising, since he is trying to capture only a summary of the flows at each side and then do the correlation using only those summaries. It would be neat (in a theoretical sense) to learn that it works, but it seems to me that there's a lot of work left here in showing that it would work in practice. It also seems likely that his definition of false positive rate and my use of it above don't line up completely: it would be great if somebody here could work on reconciling them.

For a possibly related case where a series of academic research papers misunderstood the base rate fallacy and came to bad conclusions, see Mike's critique of website fingerprinting attacks plus the follow-up paper from CCS this year confirming that he's right.

I should also emphasize that whether this attack can be performed at all has to do with how much of the Internet the adversary is able to measure or control. This diversity question is a large and important one, with lots of attention already. See more discussion here.

In summary, it's great to see more research on traffic confirmation attacks, but a) traffic confirmation attacks are not a new area, so don't freak out without actually reading the papers, and b) this particular one, while kind of neat, doesn't supersede all the previous papers.

(I should put in an addendum here for the people who are wondering if everything they read on the Internet in a given week is surely all tied together: we don't have any reason to think that this attack, or one like it, is related to the recent arrests of a few dozen people around the world. So far, all indications are that those arrests are best explained by bad opsec for a few of them, and then those few pointed to the others when they were questioned.)

[Edit: be sure to read Sambuddho's comment below, too. -RD]

Facebook, hidden services, and https certs

Today Facebook unveiled its hidden service that lets users access their website more safely. Users and journalists have been asking for our response; here are some points to help you understand our thinking.

Part one: yes, visiting Facebook over Tor is not a contradiction

I didn't even realize I should include this section, until I heard from a journalist today who hoped to get a quote from me about why Tor users wouldn't ever use Facebook. Putting aside the (still very important) questions of Facebook's privacy habits, their harmful real-name policies, and whether you should or shouldn't tell them anything about you, the key point here is that anonymity isn't just about hiding from your destination.

There's no reason to let your ISP know when or whether you're visiting Facebook. There's no reason for Facebook's upstream ISP, or some agency that surveils the Internet, to learn when and whether you use Facebook. And if you do choose to tell Facebook something about you, there's still no reason to let them automatically discover what city you're in today while you do it.

Also, we should remember that there are some places in the world that can't reach Facebook. Long ago I talked to a Facebook security person who told me a fun story. When he first learned about Tor, he hated and feared it because it "clearly" intended to undermine their business model of learning everything about all their users. Then suddenly Iran blocked Facebook, a good chunk of the Persian Facebook population switched over to reaching Facebook via Tor, and he became a huge Tor fan because otherwise those users would have been cut off. Other countries like China followed a similar pattern after that. This switch in his mind from "Tor as a privacy tool to let users control their own data" to "Tor as a communications tool to give users freedom to choose what sites they visit" is a great example of the diversity of uses for Tor: whatever it is you think Tor is for, I guarantee there's a person out there who uses it for something you haven't considered.

Part two: we're happy to see broader adoption of hidden services

I think it is great for Tor that Facebook has added a .onion address. There are some compelling use cases for hidden services: see for example the ones described at using Tor hidden services for good, as well as upcoming decentralized chat tools like Ricochet where every user is a hidden service, so there's no central point to tap or lean on to retain data. But we haven't really publicized these examples much, especially compared to the publicity that the "I have a website that the man wants to shut down" examples have gotten in recent years.

Hidden services provide a variety of useful security properties. First — and the one that most people think of — because the design uses Tor circuits, it's hard to discover where the service is located in the world. But second, because the address of the service is the hash of its key, they are self-authenticating: if you type in a given .onion address, your Tor client guarantees that it really is talking to the service that knows the private key that corresponds to the address. A third nice feature is that the rendezvous process provides end-to-end encryption, even when the application-level traffic is unencrypted.

So I am excited that this move by Facebook will help to continue opening people's minds about why they might want to offer a hidden service, and help other people think of further novel uses for hidden services.

Another really nice implication here is that Facebook is committing to taking its Tor users seriously. Hundreds of thousands of people have been successfully using Facebook over Tor for years, but in today's era of services like Wikipedia choosing not to accept contributions from users who care about privacy, it is refreshing and heartening to see a large website decide that it's ok for their users to want more safety.

As an addendum to that optimism, I would be really sad if Facebook added a hidden service, had a few problems with trolls, and decided that they should prevent Tor users from using their old https://www.facebook.com/ address. So we should be vigilant in helping Facebook continue to allow Tor users to reach them through either address.

Part three: their vanity address doesn't mean the world has ended

Their hidden service name is "facebookcorewwwi.onion". For a hash of a public key, that sure doesn't look random. Many people have been wondering how they brute-forced the entire name.

The short answer is that for the first half of it ("facebook"), which is only 40 bits, they generated keys over and over until they got some keys whose first 40 bits of the hash matched the string they wanted.

Then they had some keys whose name started with "facebook", and they looked at the second half of each of them to pick out the ones with pronounceable and thus memorable syllables. The "corewwwi" one looked best to them — meaning they could come up with a story about why that's a reasonable name for Facebook to use — so they went with it.

So to be clear, they would not be able to produce exactly this name again if they wanted to. They could produce other hashes that start with "facebook" and end with pronounceable syllables, but that's not brute-forcing all of the hidden service name (all 80 bits).

For those who want to explore the math more, read about the "birthday attack". And for those who want to learn more (please help!) about the improvements we'd like to make for hidden services, including stronger keys and stronger names, see hidden services need some love and Tor proposal 224.

Part four: what do we think about an https cert for a .onion address?

Facebook didn't just set up a hidden service. They also got an https certificate for their hidden service, and it's signed by Digicert so your browser will accept it. This choice has produced some feisty discussions in the CA/Browser community, which decides what kinds of names can get official certificates. That discussion is still ongoing, but here are my early thoughts on it.

In favor: we, the Internet security community, have taught people that https is necessary and http is scary. So it makes sense that users want to see the string "https" in front of them.

Against: Tor's .onion handshake basically gives you all of that for free, so by encouraging people to pay Digicert we're reinforcing the CA business model when maybe we should be continuing to demonstrate an alternative.

In favor: Actually https does give you a little bit more, in the case where the service (Facebook's webserver farm) isn't in the same location as the Tor program. Remember that there's no requirement for the webserver and the Tor process to be on the same machine, and in a complicated set-up like Facebook's they probably shouldn't be. One could argue that this last mile is inside their corporate network, so who cares if it's unencrypted, but I think the simple phrase "ssl added and removed here" will kill that argument.

Against: if one site gets a cert, it will further reinforce to users that it's "needed", and then the users will start asking other sites why they don't have one. I worry about starting a trend where you need to pay Digicert money to have a hidden service or your users think it's sketchy — especially since hidden services that value their anonymity could have a hard time getting a certificate.

One alternative would be to teach Tor Browser that https .onion addresses don't deserve a scary pop-up warning. A more thorough approach in that direction is to have a way for a hidden service to generate its own signed https cert using its onion private key, and teach Tor Browser how to verify them — basically a decentralized CA for .onion addresses, since they are self-authenticating anyway. Then you don't have to go through the nonsense of pretending to see if they could read email at the domain, and generally furthering the current CA model.

We could also imagine a pet name model where the user can tell her Tor Browser that this .onion address "is" Facebook. Or the more direct approach would be to ship a bookmark list of "known" hidden services in Tor Browser — like being our own CA, using the old-fashioned /etc/hosts model. That approach would raise the political question though of which sites we should endorse in this way.

So I haven't made up my mind yet about which direction I think this discussion should go. I'm sympathetic to "we've taught the users to check for https, so let's not confuse them", but I also worry about the slippery slope where getting a cert becomes a required step to having a reputable service. Let us know if you have other compelling arguments for or against.

Part five: what remains to be done?

In terms of both design and security, hidden services still need some love. We have plans for improved designs (see Tor proposal 224) but we don't have enough funding and developers to make it happen. We've been talking to some Facebook engineers this week about hidden service reliability and scalability, and we're excited that Facebook is thinking of putting development effort into helping improve hidden services.

And finally, speaking of teaching people about the security features of .onion sites, I wonder if "hidden services" is no longer the best phrase here. Originally we called them "location-hidden services", which was quickly shortened in practice to just "hidden services". But protecting the location of the service is just one of the security features you get. Maybe we should hold a contest to come up with a new name for these protected services? Even something like "onion services" might be better if it forces people to learn what it is.

Tor 0.2.5.7-rc is out

Tor 0.2.5.7-rc fixes several regressions from earlier in the 0.2.5.x release series, and some long-standing bugs related to ORPort reachability testing and failure to send CREATE cells. It is the first release candidate for the Tor 0.2.5.x series.

The tarball and signature file are currently available from
https://www.torproject.org/dist/
and packages and bundles will be available soon.

Changes in version 0.2.5.7-rc - 2014-09-11

  • Major bugfixes (client, startup):
    • Start making circuits as soon as DisableNetwork is turned off.
      When Tor started with DisableNetwork set, it would correctly
      conclude that it shouldn't build circuits, but it would mistakenly
      cache this conclusion, and continue believing it even when
      DisableNetwork is set to 0. Fixes the bug introduced by the fix
      for bug 11200; bugfix on 0.2.5.4-alpha.

    • Resume expanding abbreviations for command-line options. The fix
      for bug 4647 accidentally removed our hack from bug 586 that
      rewrote HashedControlPassword to __HashedControlSessionPassword
      when it appears on the commandline (which allowed the user to set
      her own HashedControlPassword in the torrc file while the
      controller generates a fresh session password for each run). Fixes
      bug 12948; bugfix on 0.2.5.1-alpha.

    • Warn about attempts to run hidden services and relays in the same
      process: that's probably not a good idea. Closes ticket 12908.
  • Major bugfixes (relay):
    • Avoid queuing or sending destroy cells for circuit ID zero when we
      fail to send a CREATE cell. Fixes bug 12848; bugfix on 0.0.8pre1.
      Found and fixed by "cypherpunks".

    • Fix ORPort reachability detection on relays running behind a
      proxy, by correctly updating the "local" mark on the controlling
      channel when changing the address of an or_connection_t after the
      handshake. Fixes bug 12160; bugfix on 0.2.4.4-alpha.
  • Minor features (bridge):
    • Add an ExtORPortCookieAuthFileGroupReadable option to make the
      cookie file for the ExtORPort g+r by default.
  • Minor features (geoip):
    • Update geoip and geoip6 to the August 7 2014 Maxmind GeoLite2
      Country database.
  • Minor bugfixes (logging):
    • Reduce the log severity of the "Pluggable transport proxy does not
      provide any needed transports and will not be launched." message,
      since Tor Browser includes several ClientTransportPlugin lines in
      its torrc-defaults file, leading every Tor Browser user who looks
      at her logs to see these notices and wonder if they're dangerous.
      Resolves bug 13124; bugfix on 0.2.5.3-alpha.

    • Downgrade "Unexpected onionskin length after decryption" warning
      to a protocol-warn, since there's nothing relay operators can do
      about a client that sends them a malformed create cell. Resolves
      bug 12996; bugfix on 0.0.6rc1.

    • Log more specific warnings when we get an ESTABLISH_RENDEZVOUS
      cell on a cannibalized or non-OR circuit. Resolves ticket 12997.

    • When logging information about an EXTEND2 or EXTENDED2 cell, log
      their names correctly. Fixes part of bug 12700; bugfix
      on 0.2.4.8-alpha.

    • When logging information about a relay cell whose command we don't
      recognize, log its command as an integer. Fixes part of bug 12700;
      bugfix on 0.2.1.10-alpha.

    • Escape all strings from the directory connection before logging
      them. Fixes bug 13071; bugfix on 0.1.1.15. Patch from "teor".
  • Minor bugfixes (controller):
    • Restore the functionality of CookieAuthFileGroupReadable. Fixes
      bug 12864; bugfix on 0.2.5.1-alpha.

    • Actually send TRANSPORT_LAUNCHED and HS_DESC events to
      controllers. Fixes bug 13085; bugfix on 0.2.5.1-alpha. Patch
      by "teor".
  • Minor bugfixes (compilation):
    • Fix compilation of test.h with MSVC. Patch from Gisle Vanem;
      bugfix on 0.2.5.5-alpha.

    • Make the nmake make files work again. Fixes bug 13081. Bugfix on
      0.2.5.1-alpha. Patch from "NewEraCracker".

    • In routerlist_assert_ok(), don't take the address of a
      routerinfo's cache_info member unless that routerinfo is non-NULL.
      Fixes bug 13096; bugfix on 0.1.1.9-alpha. Patch by "teor".

    • Fix a large number of false positive warnings from the clang
      analyzer static analysis tool. This should make real warnings
      easier for clang analyzer to find. Patch from "teor". Closes
      ticket 13036.
  • Distribution (systemd):
    • Verify configuration file via ExecStartPre in the systemd unit
      file. Patch from intrigeri; resolves ticket 12730.

    • Explicitly disable RunAsDaemon in the systemd unit file. Our
      current systemd unit uses "Type = simple", so systemd does not
      expect tor to fork. If the user has "RunAsDaemon 1" in their
      torrc, then things won't work as expected. This is e.g. the case
      on Debian (and derivatives), since there we pass "--defaults-torrc
      /usr/share/tor/tor-service-defaults-torrc" (that contains
      "RunAsDaemon 1") by default. Patch by intrigeri; resolves
      ticket 12731.
  • Documentation:
    • Adjust the URLs in the README to refer to the new locations of
      several documents on the website. Fixes bug 12830. Patch from
      Matt Pagan.

    • Document 'reject6' and 'accept6' ExitPolicy entries. Resolves
      ticket 12878.

A call to arms: Helping Internet services accept anonymous users

Looking for a way to help the Internet stay open and free? This topic needs some dedicated people to give it more attention — it could easily grow to as large a project as Tor itself. In the short term, OTF's Information Controls Fellowship Program has expressed interest in funding somebody to get this project going, and EFF's Eva Galperin has said she'd be happy to manage the person as an OTF fellow at EFF, with mentorship from Tor people. The first round of those proposals has a deadline in a few days, but if that timeframe doesn't work for you, this problem isn't going away: let us know and we can work with you to help you coordinate other funding.

The problem

We used to think there were two main ways that the Tor network could fail. First, legal or policy pressure can make it so nobody is willing to run a relay. Second, pressure on or from Internet Service Providers can reduce the number of places willing to host exit relays, which in turn squeezes down the anonymity that the network can provide. Both of these threats are hard to solve, but they are challenges that we've known about for a decade, and due in large part to strong ongoing collaborations we have a pretty good handle on them.

We missed a third threat to Tor's success: a growing number of websites treat users from anonymity services differently. Slashdot doesn't let you post comments over Tor, Wikipedia won't let you edit over Tor, and Google sometimes gives you a captcha when you try to search (depending on what other activity they've seen from that exit relay lately). Some sites like Yelp go further and refuse to even serve pages to Tor users.

The result is that the Internet as we know it is siloing. Each website operator works by itself to figure out how to handle anonymous users, and generally neither side is happy with the solution. The problem isn't limited to just Tor users, since these websites face basically the same issue with users from open proxies, users from AOL, users from Africa, etc.

Two recent trends make the problem more urgent. First, sites like Cloudflare, Akamai, and Disqus create bottlenecks where their components are used by many websites. This centralization impacts many websites at once when e.g. Cloudflare changes its strategy for how to handle Tor users. Second, services increasingly outsource their blacklisting, such that e.g. Skype refuses connections from IP addresses that run Tor exit relays, not because they worry about abuse via Tor (it's hard to use Skype over Tor), but because their blacklist provider has an incentive to be overbroad in its blocking. (Blacklist providers compete in part by having "the most complete" list, and in many cases it's hard for services to notice that they're losing contributions from now-missing users.)

Technical mechanisms do exist to let anonymous users interact with websites in ways that control abuse better. Simple technical approaches include "you can read but you can't post" or "you have to log in to post". More complex approaches track reputation of users and give them access to site features based on past behavior of the user rather than on past behavior of their network address. Several research designs suggest using anonymous credentials, where users anonymously receive a cryptographic credential and then prove to the website that they possess a credential that hasn't been blacklisted — without revealing their credential, so the website can't link them to their past behavior.

Social mechanisms have also proven effective in some cases, ranging from community moderation (I hear Wikipedia Germany manually approves edits from users who don't have sufficiently reputable accounts), to flagging behavior from Tor users (even though you don't know *which* Tor user it is) so other participants can choose how to interact with them.

But applying these approaches to real-world websites has not gone well overall. Part of the challenge is that the success stories are not well-publicized, so each website feels like it's dealing with the question in isolation. Some sites do in fact face quite different problems, which require different solutions: Wikipedia doesn't want jerks to change the content of pages, whereas Yelp doesn't want competitors to scrape all its pages. We can also imagine that some companies, like ones that get their revenue from targeted advertising, are fundamentally uninterested in allowing anonymous users at all.

A way forward

The solution I envision is to get a person who is both technical and good at activism to focus on this topic. Step one is to enumerate the set of websites and other Internet services that handle Tor connections differently from normal connections, and look for patterns that help us identify the common (centralized) services that impact many sites. At the same time, we should make a list of solutions — technical and social — that are in use today. There are a few community-led starts on the Tor wiki already, like the DontBlockMe page and a List of Services Blocking Tor.

Step two is to sort the problem websites based on how amenable they would be to our help. Armed with the toolkit of options we found in step one, we should go to the first (most promising) site on the list and work with them to understand their problem. Ideally we can adapt one of the ideas from the toolkit; otherwise we'll need to invent and develop a new approach tailored to their situation and needs. Then we should go to the second site on the list with our (now bigger) toolkit, and so on down the list. Once we have some success stories, we can consider how to scale better, such as holding a conference where we invite the five best success cases plus the next five unsolved sites on our list.

A lot of the work will be building and maintaining social connections with engineers at the services, to help them understand what's possible and to track regressions (for example, every year or so Google gets a new engineer in charge of deciding when to give out Captchas, and they seem to have no institutional memory of how the previous one decided to handle Tor users). It might be that the centralization of Cloudflare et al can be turned around into an advantage, where making sure they have good practices will scale to help many websites at once.

EFF is the perfect organization to lead this charge, given its community connections, its campaigns like Who has your back?, and its more (at least more than Tor ;) neutral perspective on the topic. And now, when everybody is sympathetic about the topic of surveillance, is a great time to try to take back some ground. We have a wide variety of people who want to help, from scientists and research groups who would help with technical solutions if only they understood the real problems these sites face, to users and activists who can help publicize both the successful cases and the not-yet-successful cases.

Looking ahead to the future, I'm also part of an upcoming research collaboration with Dan Boneh, Andrea Forte, Rachel Greenstadt, Ryan Henry, Benjamin Mako Hill, and Dan Wallach that will look at both the technical side of the problem (building more useful ideas for the toolkit) and the social side of the problem: how can we quantify the loss to Wikipedia, and to society at large, from turning away anonymous contributors? Wikipedians say "we have to blacklist all these IP addresses because of trolls" and "Wikipedia is rotting because nobody wants to edit it anymore" in the same breath, and we believe these points are related. The group is at the "applying for an NSF grant" stage currently, so it will be a year or more before funding appears, but I mention it because we should get somebody to get the ball rolling now, and hopefully we can expect reinforcements to appear as momentum builds.

In summary, if this call to arms catches your eye, your next steps are to think about what you most want to work on to get started, and how you would go about doing it. You can apply for an OTF fellowship, or we can probably help you find other funding sources as needed too.

Tor security advisory: "relay early" traffic confirmation attack

This advisory was posted on the tor-announce mailing list.

SUMMARY:

On July 4 2014 we found a group of relays that we assume were trying to deanonymize users. They appear to have been targeting people who operate or access Tor hidden services. The attack involved modifying Tor protocol headers to do traffic confirmation attacks.

The attacking relays joined the network on January 30 2014, and we removed them from the network on July 4. While we don't know when they started doing the attack, users who operated or accessed hidden services from early February through July 4 should assume they were affected.

Unfortunately, it's still unclear what "affected" includes. We know the attack looked for users who fetched hidden service descriptors, but the attackers likely were not able to see any application-level traffic (e.g. what pages were loaded or even whether users visited the hidden service they looked up). The attack probably also tried to learn who published hidden service descriptors, which would allow the attackers to learn the location of that hidden service. In theory the attack could also be used to link users to their destinations on normal Tor circuits too, but we found no evidence that the attackers operated any exit relays, making this attack less likely. And finally, we don't know how much data the attackers kept, and due to the way the attack was deployed (more details below), their protocol header modifications might have aided other attackers in deanonymizing users too.

Relays should upgrade to a recent Tor release (0.2.4.23 or 0.2.5.6-alpha), to close the particular protocol vulnerability the attackers used — but remember that preventing traffic confirmation in general remains an open research problem. Clients that upgrade (once new Tor Browser releases are ready) will take another step towards limiting the number of entry guards that are in a position to see their traffic, thus reducing the damage from future attacks like this one. Hidden service operators should consider changing the location of their hidden service.

THE TECHNICAL DETAILS:

We believe they used a combination of two classes of attacks: a traffic confirmation attack and a Sybil attack.

A traffic confirmation attack is possible when the attacker controls or observes the relays on both ends of a Tor circuit and then compares traffic timing, volume, or other characteristics to conclude that the two relays are indeed on the same circuit. If the first relay in the circuit (called the "entry guard") knows the IP address of the user, and the last relay in the circuit knows the resource or destination she is accessing, then together they can deanonymize her. You can read more about traffic confirmation attacks, including pointers to many research papers, at this blog post from 2009:
https://blog.torproject.org/blog/one-cell-enough

The particular confirmation attack they used was an active attack where the relay on one end injects a signal into the Tor protocol headers, and then the relay on the other end reads the signal. These attacking relays were stable enough to get the HSDir ("suitable for hidden service directory") and Guard ("suitable for being an entry guard") consensus flags. Then they injected the signal whenever they were used as a hidden service directory, and looked for an injected signal whenever they were used as an entry guard.

The way they injected the signal was by sending sequences of "relay" vs "relay early" commands down the circuit, to encode the message they want to send. For background, Tor has two types of cells: link cells, which are intended for the adjacent relay in the circuit, and relay cells, which are passed to the other end of the circuit. In 2008 we added a new kind of relay cell, called a "relay early" cell, which is used to prevent people from building very long paths in the Tor network. (Very long paths can be used to induce congestion and aid in breaking anonymity). But the fix for infinite-length paths introduced a problem with accessing hidden services, and one of the side effects of our fix for bug 1038 was that while we limit the number of outbound (away from the client) "relay early" cells on a circuit, we don't limit the number of inbound (towards the client) relay early cells.

So in summary, when Tor clients contacted an attacking relay in its role as a Hidden Service Directory to publish or retrieve a hidden service descriptor (steps 2 and 3 on the hidden service protocol diagrams), that relay would send the hidden service name (encoded as a pattern of relay and relay-early cells) back down the circuit. Other attacking relays, when they get chosen for the first hop of a circuit, would look for inbound relay-early cells (since nobody else sends them) and would thus learn which clients requested information about a hidden service.

There are three important points about this attack:

A) The attacker encoded the name of the hidden service in the injected signal (as opposed to, say, sending a random number and keeping a local list mapping random number to hidden service name). The encoded signal is encrypted as it is sent over the TLS channel between relays. However, this signal would be easy to read and interpret by anybody who runs a relay and receives the encoded traffic. And we might also worry about a global adversary (e.g. a large intelligence agency) that records Internet traffic at the entry guards and then tries to break Tor's link encryption. The way this attack was performed weakens Tor's anonymity against these other potential attackers too — either while it was happening or after the fact if they have traffic logs. So if the attack was a research project (i.e. not intentionally malicious), it was deployed in an irresponsible way because it puts users at risk indefinitely into the future.

(This concern is in addition to the general issue that it's probably unwise from a legal perspective for researchers to attack real users by modifying their traffic on one end and wiretapping it on the other. Tools like Shadow are great for testing Tor research ideas out in the lab.)

B) This protocol header signal injection attack is actually pretty neat from a research perspective, in that it's a bit different from previous tagging attacks which targeted the application-level payload. Previous tagging attacks modified the payload at the entry guard, and then looked for a modified payload at the exit relay (which can see the decrypted payload). Those attacks don't work in the other direction (from the exit relay back towards the client), because the payload is still encrypted at the entry guard. But because this new approach modifies ("tags") the cell headers rather than the payload, every relay in the path can see the tag.

C) We should remind readers that while this particular variant of the traffic confirmation attack allows high-confidence and efficient correlation, the general class of passive (statistical) traffic confirmation attacks remains unsolved and would likely have worked just fine here. So the good news is traffic confirmation attacks aren't new or surprising, but the bad news is that they still work. See https://blog.torproject.org/blog/one-cell-enough for more discussion.

Then the second class of attack they used, in conjunction with their traffic confirmation attack, was a standard Sybil attack — they signed up around 115 fast non-exit relays, all running on 50.7.0.0/16 or 204.45.0.0/16. Together these relays summed to about 6.4% of the Guard capacity in the network. Then, in part because of our current guard rotation parameters, these relays became entry guards for a significant chunk of users over their five months of operation.

We actually noticed these relays when they joined the network, since the DocTor scanner reported them. We considered the set of new relays at the time, and made a decision that it wasn't that large a fraction of the network. It's clear there's room for improvement in terms of how to let the Tor network grow while also ensuring we maintain social connections with the operators of all large groups of relays. (In general having a widely diverse set of relay locations and relay operators, yet not allowing any bad relays in, seems like a hard problem; on the other hand our detection scripts did notice them in this case, so there's hope for a better solution here.)

In response, we've taken the following short-term steps:

1) Removed the attacking relays from the network.

2) Put out a software update for relays to prevent "relay early" cells from being used this way.

3) Put out a software update that will (once enough clients have upgraded) let us tell clients to move to using one entry guard rather than three, to reduce exposure to relays over time.

4) Clients can tell whether they've received a relay or relay-early cell. For expert users, the new Tor version warns you in your logs if a relay on your path injects any relay-early cells: look for the phrase "Received an inbound RELAY_EARLY cell".

The following longer-term research areas remain:

5) Further growing the Tor network and diversity of relay operators, which will reduce the impact from an adversary of a given size.

6) Exploring better mechanisms, e.g. social connections, to limit the impact from a malicious set of relays. We've also formed a group to pay more attention to suspicious relays in the network:
https://blog.torproject.org/blog/how-report-bad-relays

7) Further reducing exposure to guards over time, perhaps by extending the guard rotation lifetime:
https://blog.torproject.org/blog/lifecycle-of-a-new-relay
https://blog.torproject.org/blog/improving-tors-anonymity-changing-guard...

8) Better understanding statistical traffic correlation attacks and whether padding or other approaches can mitigate them.

9) Improving the hidden service design, including making it harder for relays serving as hidden service directory points to learn what hidden service address they're handling:
https://blog.torproject.org/blog/hidden-services-need-some-love

OPEN QUESTIONS:

Q1) Was this the Black Hat 2014 talk that got canceled recently?
Q2) Did we find all the malicious relays?
Q3) Did the malicious relays inject the signal at any points besides the HSDir position?
Q4) What data did the attackers keep, and are they going to destroy it? How have they protected the data (if any) while storing it?

Great questions. We spent several months trying to extract information from the researchers who were going to give the Black Hat talk, and eventually we did get some hints from them about how "relay early" cells could be used for traffic confirmation attacks, which is how we started looking for the attacks in the wild. They haven't answered our emails lately, so we don't know for sure, but it seems likely that the answer to Q1 is "yes". In fact, we hope they *were* the ones doing the attacks, since otherwise it means somebody else was. We don't yet know the answers to Q2, Q3, or Q4.
