Onion Service version 2 deprecation timeline

From today (July 2nd, 2020), the Internet has around 16 months to migrate from onion services v2 to v3 once and for all.
 
Nostalgia
 
More than 15 years ago, Onion Services (at the time named Hidden Services) saw the light of day. They began as an experiment to learn more about what the Tor network could offer. The protocol reached version 2 soon after deployment.
 
Over the years, onion services evolved, and version 2 developed into a strong, stable product that has been in use for over a decade now. During all those years, onion service adoption increased drastically: from the .onion TLD being standardized as a special-use domain name by the IETF, to SSL certificates being issued for .onion addresses. Onion services these days support a whole ecosystem of client applications, from web browsing to file sharing and private messaging.
 
As humankind's understanding of mathematics and cryptography has evolved, the foundations of version 2 have become fragile and, at this point in time, unsafe. If you want to read more about the technical problems version 2 faces, please read this post, and don't hesitate to ask questions.
 
Which leads us to 2015: a large-scale development effort spanning over three years resulted in version 3. On January 9th, 2018, Tor version 0.3.2.9 was released, the first tor release to support onion service version 3. And I bet you've encountered them: they are 56 characters long and end in .onion ;).
 
Every single relay on the Tor Network now supports version 3. It is also today's default version when creating an onion service.
 
With onions v3 standing strong, we are in a good position to retire version 2. Version 2 has run its course: it has provided security and privacy to countless people around the world. But more importantly, it has created and propelled a new era of private and secure communication.
 
Retirement
 
Here is our planned deprecation timeline:
 
1. September 15th, 2020
0.4.4.x: Tor will start warning onion service operators and clients that v2 is deprecated and will be obsolete in version 0.4.6.
 
2. July 15th, 2021
0.4.6.x: Tor will no longer support v2 and support will be removed from the code base.
 
3. October 15th, 2021
We will release new Tor client stable versions for all supported series that will disable v2.
 
This effectively means that from today (July 2nd, 2020), the Internet has around 16 months to migrate from v2 to v3 once and for all.
 
We'll probably run into some difficulties here; no matter how prepared we think we are, we find that there are always more surprises. Nonetheless, we'll do our best to fix problems as they come up, and try to make this process as smooth as possible. 
 
Transition from v2 to v3
 
This section details how to set up a v3 service from your existing v2 service. Unfortunately, there is no mechanism to cross-certify the two addresses.
 
In torrc, to create a version 3 address, you simply need to add these two lines. The default version is now 3, so you don't need to set it explicitly.
 
   HiddenServiceDir /full/path/to/your/hs/v3/directory/
   HiddenServicePort <virtual port> <target-address>:<target-port>
 
Finally, if you wish to keep running your version 2 service until it is deprecated, to provide a transition path for your users, add this line to the configuration block of your version 2 service:
 
   HiddenServiceVersion 2
 
This makes it easy to see in your configuration file which service is which version.
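Put together, a torrc carrying both services during the transition might look like this (the directory paths and the port mapping below are placeholders; both services can point at the same local web server):

   ## Existing v2 service, kept alive until deprecation
   HiddenServiceDir /var/lib/tor/my_service_v2/
   HiddenServiceVersion 2
   HiddenServicePort 80

   ## New v3 service (version 3 is the default, so no HiddenServiceVersion needed)
   HiddenServiceDir /var/lib/tor/my_service_v3/
   HiddenServicePort 80

After reloading tor, the new 56-character address can be read from the hostname file inside the v3 directory.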
 
Good luck with the migration! Let us know about any problems in the comments below, or post on the tor-talk mailing list.
Anonymous

July 02, 2020

Permalink

Does this mean the end of memorable onion addresses? It was a good run, but it's time for better privacy and security.

R.I.P.

Can you please post the correct v3 addresses for all Tor Project onions?

Even better, for The Intercept, The Guardian, Buzzfeed, etc?

Would it be a bad idea for users to keep v3 onion addresses on a dedicated LUKS-encrypted USB and access them exclusively via the latest Tails booted from a read-only DVD?

What is "best practice" for verifying detached signatures of Tails ISO images and Tor Browser images going forward, given the keyring-bombing problem?

TIA

> What is "best practice" for verifying detached signatures of Tails ISO images and Tor Browser images going forward, given the keyring-bombing problem?

Short answer:

If you're really worried, download keys only from the project website and not from a keyserver. https://tails.boum.org/doc/about/openpgp_keys/index.en.html

Or, just make backups of your ~/.gnupg directory so you can restore it if anything happens.

Long answer:

I assume you're talking about CVE-2019-13050, aka certificate spamming (or signature spamming, or whatever you prefer to call it). First of all, this is only relevant when importing a new key or refreshing an existing key. Ideally you should only have to import the key once, store it preferably on read-only media such as a CD or a paper QR code, and keep it forever. At the very least, you should do this with the key fingerprint. You can completely avoid certificate spamming attacks by downloading the key from the project's website instead of a keyserver. Of course, you must still use the same key authentication practices no matter where you get the key from. Here's a short summary of the vulnerability: https://access.redhat.com/articles/4264021

Secondly, I believe most or all of the keyserver implementations have patched against this. Spamming attacks take up a lot of space and bandwidth on the keyservers themselves. In response, many servers have begun discarding signatures that are unknown, or those that don't link to an existing web of trust. As one example, keys.openpgp.org decided to give up and just stop accepting signatures altogether. Anecdotally, I've had "auto-retrieve-key" enabled (when verifying a signature, gpg will automatically download the key from the keyserver if it's not in the keyring) for the past six months or so, and have imported probably 100 keys this way without issue. Occasionally it takes a few minutes to import certain keys, but I'm fine with that. Here's a good article about the history of keyservers and attacks against them; I don't necessarily agree with the author's opinions, but it gives some background: https://gist.github.com/rjhansen/f716c3ff4a7068b50f2d8896e54e4b7e

Third, it's a well-known vuln and every current version is patched. Even so, at worst it's a temporary denial of service attack. You should be keeping regular backups of your public keyring in the first place, as it can become corrupt for any number of reasons ranging from power failures to ransomware. Also note that, from what I understand, GPG automatically keeps a backup of the keyring, which is updated after every *successful* operation. If your keyring gets corrupted during an import, you should be able to replace pubring.gpg with pubring.gpg~ and be back where you were before.

Finally, watch out for anti-GPG FUD, which has become popular in the past few years. Many of the criticisms are not wrong, but they're often unfair and over-hyped. I don't know why such a vendetta against GPG has developed. Sure, it's showing its age, and it has its problems, but so does everything else. Personally, I think people are annoyed by GPG because it's so inconvenient to use, so they magnify every little flaw hoping the world will switch to something more user-friendly and therefore centralized.

Thank you so much for your reply! This is a good example of the kind of useful basic information I'd like to see more of here, as opposed to posts which sometimes appear intended mostly to keep grantors happy (although those posts also have value).

Indeed, I was asking about CVE-2019-13050 (key-bombing or sig-bombing).

I have been downloading keys from multiple sources and checking full fingerprints, subkey expiration dates, etc, for many years (I see that trick is also mentioned in another reply below). That is, I obtain what should be the correct key in two or more ways (using gpg itself from two keyservers, and https via Tor Browser) from various sites, giving them different file names, then import them into GPG running in a temporary instance. GPG running in Tails should kick if they do not appear to be the same key (often some versions show sigs others lack, of course, which is expected). I also store multiple copies of what should be the correct key in multiple well encrypted places (USBs, encrypted directories on hard drives, etc) and periodically check them against each other. The hope is that even an attacker who gains illicit access to one or more of my devices will not be able to anticipate all the things I do and that (hopefully) if the worst happens I will notice something appears to be wrong.

You make a very good point about a GPG key downloaded directly from a trusted website such as www.torproject.org being free of spammed sigs, a point which had not occurred to me. Previously I had given more weight to uncertainty that a bad actor might have replaced the genuine key without being noticed (I certainly hope TP has regular checks against that), or that despite https and certs my Tor Browser might have somehow been tricked into using an imposter site. Thus the fallback upon trying various ways to try to catch a hypothetical impostor key in a contradiction.

> Secondly, I believe most or all of the keyserver implementations have patched against this. Spamming attacks take up a lot of space and bandwidth on the keyservers themselves. In response, many servers have begun discarding signatures that are unknown, or those that don't link to an existing web of trust.

Good to know; thank you.

> As one example, keys.openpgp.org decided to give up and just stop accepting signatures altogether.

That is one of the keyservers I use for some of my attempts to catch hypothetical imposter keys. (Well, not entirely hypothetical since we know bad actors or jokers have in fact submitted fake keys.) Has anyone else noticed that pgp.mit.edu has become less useful over time?

The web of trust has problems but I think it retains value. I think signing keys with keys which have been known for a long time and which themselves are signed is valuable. Before key bombing became such an issue I did extensive sig checking at least once a year, for critical signing keys.

> Third, it's a well-known vuln and every current version is patched.

Very good to know. This is a good example of the kind of very important very useful and morale-improving information which I would like to see TP share, perhaps in a regular Fri afternoon "Have you heard?" post in which Tor users are allowed to raise questions they would like to see answered in the comment section.

> Even so, at worst it's a temporary denial of service attack. You should be keeping regular backups of your public keyring in the first place, as it can become corrupt for any number of reasons ranging from power failures to ransomware

Right, I am doing that but will do it more.

I have lost at least two computers to power failures (third world power distribution although things are getting a bit more reliable). A related source of keyring corruption or loss is that equipment dies for mysterious reasons, often several at the same time, which seems suspicious.

The laptop I am using right now seems to work fine using Tails 4.8 but unfortunately has a very nasty habit of freezing up every hour or so under Debian 10.4 with MATE. One of my guesses is that an unfixed bug in Wayland (when a device possibly lacks drivers/firmware written for the exact same make and model device) is somehow involved, leading to concern that when Tails adopts Wayland in future versions it may suddenly become unreliable.

> Also note that, from what I understand, GPG automatically keeps a backup of the keyring, which is updated after every *successful* operation. If your keyring gets corrupted during an import, you should be able to replace pubring.gpg with pubring.gpg~ and be back where you were before.

Yes, I have never tested that but I believe that is true. Obviously won't help guard against malicious deletion but as you say other sources of data loss are probably more likely for most of us.

These points would all be good additions to the Tor Browser documentation, ideally on a separate page, perhaps called something like "Useful GPG Tips and Tricks".

I agree with you that GPG FUD is unwarranted, dangerous, and possibly in some cases coming from bad actors who hate strong encryption. As you know, for many years GPG was maintained by only one person, upon whom the entire world depended, and it eventually began to break; but I understand more people finally joined the team, and the code was rewritten and much improved (for one thing, GPG became much faster!). It's over-featured but invaluable, if only because it's used to verify ISO images and tarballs for everything else. I also still use it to encrypt files (both data at rest and data in motion). Short shell scripts can help avoid typing multiple flags in GPG commands.

Incidentally, the existing elliptic-curve crypto tools in the Debian repository are still at the level of ill-conceived and hard-to-use school programming projects. We need tools written in the true Unix spirit (do one thing but do it very well), in particular tools that play nicely with pipes.

I am trying to save all the links, but I often cannot find them when I need them in a high-stress situation, so it would be nice if TP had some useful, concise FAQ pages bookmarked in Tor Browser itself. Of course, all TP pages should have a "last modified" date, because people might stumble over them via a search engine 10 years from now, by which point the information may be dangerously out of date.

As far as I know, all Tor Project onions are v2 at the moment. See: https://onion.torproject.org/ Open an issue on https://gitlab.torproject.org/ to request v3 onions. Which repository should it be posted under? Don't know.

Onion addresses of other websites cannot be authenticated by Tor Project but only by those websites. From the perspective of those hosts and their visitors, Tor Project is a third party and cannot provide proof of their addresses. See: https://trac.torproject.org/projects/tor/wiki/org/projects/WeSupportTor

Starting from Tor Browser 9.5, the host itself can hint to the browser that it has an onion address by adding the HTTP header, "Onion-Location". The browser can then automatically redirect you from the "clearnet" ("surface web") hostname to its onion. You can view the header in Tor Browser (Firefox) by opening the Main Menu --> Web Developer tools --> Network tab. See: https://stackoverflow.com/questions/33974595/in-firefox-how-do-i-see-ht…
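For site operators, advertising the onion address is a one-line server change. As a sketch (the onion hostname here is a placeholder for the site's real v3 address), in nginx it could look like:

   add_header Onion-Location http://your56characterv3addresshere.onion$request_uri;

Tor Browser 9.5+ then offers the redirect to visitors arriving over Tor.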

LUKS encryption applies to all files you save in it no matter if they are related to Tor or not. You may be ordered or coerced to decrypt data or give the passphrase by court order, legal warrant, legalese, social engineering, threat of harm, or other situations, particularly at national borders and airports.
- https://www.theverge.com/2017/2/12/14583124/nasa-sidd-bikkannavar-detai…
- https://www.aclu.org/blog/free-future/can-border-agents-search-your-ele…
- https://ssd.eff.org/en/module/things-consider-when-crossing-us-border

The "keyring-bombing problem," as you call it, is circumvented by the updated instructions on Tor Project's Support site that specify the wkd (Web Key Directory) key-locate mechanism. Developers of Tails instruct to download its key from Tails' website.
- https://support.torproject.org/tbb/how-to-verify-signature/
- https://wiki.gnupg.org/WKD
- https://tails.boum.org/install/download/
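Concretely, the Support-page approach fetches the Tor Browser signing key over WKD instead of from a keyserver, which sidesteps spammed certificates entirely:

   gpg --auto-key-locate nodefault,wkd --locate-keys torbrowser@torproject.org
   gpg --fingerprint torbrowser@torproject.org

Then compare the printed fingerprint against the one published on the website.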

As always, the 40-character fingerprints of the keys can be viewed in gpg and compared to online sources. In general, it may be possible to corroborate a key by downloading copies of it from different sources at different times through different exit nodes or public IP addresses and import the copies one by one to be sure they all have the same fingerprint. See sks-keyservers. Verify keys on a machine you trust, and locally sign the key ("lsign" in gpg) to mark it in your keyring as authenticated. You can export your entire keyring or specific keys from gpg to save in your LUKS encrypted volume and import them into Tails' gpg anytime. Optionally consider gpg's "export-clean" or "export-minimal".

> Can you please post the correct v3 addresses for all Tor Project onions?

The Tor Project will very soon have most of its services on v3 addresses. The missing up-to-date packages for OnionBalance (with v3 support) have been uploaded to Debian, and we'll deploy it soon.

Well, some were. It was feasible to memorize 16-character addresses, and it was relatively easy to verify addresses like FacebookCoreWWWI.onion. Those names made onion sites more navigable. I don't know what the solution will be now. Having a list of bookmarks or URLs would be personally identifiable and revealing.

Anonymous

July 02, 2020

Permalink

Hello! OffTopic:

There is a discussion with the owner of the OpenNet site about providing an onion mirror of the site - https://www.opennet.ru/openforum/vsluhforumID4/591.html
It looks like OpenNet is already quite friendly to Tor users; unfortunately, the owner is quite conservative and asks for reasons to run an onion service. I have to say I share some of his concerns, so I did a brief web search:

https://community.torproject.org/onion-services/
--
What are Onion Services?
Onion services are services that can only be accessed over Tor.
Running an onion service gives your users all the security of HTTPS
with the added privacy benefits of Tor Browser.
--

https://riseup.net/en/security/network-security/tor/onionservices-best-…
--
Onion services don’t need to be hidden!
You can provide a onion service for a service that you offer publically on a server that is not intended to be hidden.
Onion services are useful to protect users from passive network surveillance,
they keep the snoopers from knowing where users are connecting from and to.

--
Ask your favorite online service to provide an onion service!
Advocate for more onion services by asking those who provide the services that you use to make them available.
They are easy to setup and maintain, and there is no reason not to provide them!
--

Summarizing the above, an onion version of OpenNet may
* bring some (what?) "added privacy benefits" to users.
* "keep [the evils] from knowing where users are connecting from"
- Could you please say: what did I miss?

Concerning RiseUp's "there is no reason not to provide them": the site owner argues that extra functionality potentially increases the number of attack vectors. Thus "there is no reason not to" is not a reason to do it :-)

Could anybody provide more arguments for adding onion-service?

Personally I see the reason to keep browsing within the Tor network:
* to avoid clearnet DNS requests, and
* (probably) to avoid pumping web traffic through Tor exit nodes (as they are potentially more risky?).
- so this is all about protecting users; are there any other reasons?
Does it mean that all the onion stuff is about protecting users? Does an onion version protect users significantly better than Tor+HTTPS?

Also, unfortunately, https://www.eff.org/pages/tor-and-https does not illustrate the situation with onion services.
So -
* What are benefits of visiting site-onion-version over Tor+HTTPS for users?
* Are there pitfalls of keeping site-onion-version for site maintainers?

Also https://community.torproject.org/onion-services/overview/
provides some descriptions -
* "Location hiding" - it is not hidden site, seems like unnecessary for this situation
* "NAT punching" - not sure about, seems like unnecessary for this situation
* "End-to-end authentication" - i.e. about avoiding DNS-attacks and MITMs
* "End-to-end encryption" - i.e. strong crypto

Are there any advocates who can point to extra reasons to set up an onion service for opennet.ru?

You hit almost everything I can think of. I only had one thing to add: it reduces the load on the Tor exit nodes. It can also be faster especially when the network is under load, because your Tor circuit is not limited to a (comparatively short) list of exit nodes for hop 3. Besides that, there's no direct benefit to site operators or users, but indirectly it helps make the Tor network more sustainable.

> potentially increase number of vectors for attacks

Assuming a cloud or co-lo situation, you can set up a reverse proxy with a .onion address, running on a totally separate server under a separate account, but preferably in the same datacenter(s) as the clearnet server. This way, .onion traffic is terminated inside the datacenter's LAN, but the onion service presents no greater risk than any other co-located services or end-users, and it requires no modification to the main server itself. It would be at least as secure as the current setup, but better if you pin the site certificate in the reverse proxy to prevent PKI attacks.
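As a rough sketch (paths, ports, and the origin hostname are hypothetical), the torrc on that dedicated proxy host only needs to forward the virtual port to the local reverse proxy:

   HiddenServiceDir /var/lib/tor/site_onion/
   HiddenServicePort 80

where a local nginx listening on 8080 does a proxy_pass to the clearnet origin, with proxy_ssl_verify enabled and the site's certificate configured via proxy_ssl_trusted_certificate to pin it.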

Anonymous

July 02, 2020

Permalink

V2 is still working well, please don't shut down v2. We really love v2; it's easy to remember and all.

Anonymous

July 03, 2020

Permalink

This is desperately needed. Even a lot of brand new sites are choosing v2, I presume for aesthetic and convenience reasons at the expense of security. And when given the choice, a lot of people seem to use the v2 address for posting links or visiting sites.

> Unfortunately, there is no mechanism to cross-certify the two addresses.
I'm not sure what this means. Isn't posting a v3 link on your v2 site, or a "Moved Permanently" redirect, cross-certification enough?

It would certify the v2->v3 direction, but certifying that the v3 address is linked to the v2 address is a bit harder if you end up never visiting the v2 address.

For instance, an SSL certificate could be used for that, but then another set of trust problems arises, such as trusting CAs and so on.

The v3 address has associated keys that can be used to sign and encrypt a message containing the v2 address, and the result can be included as part of the service descriptor to be decrypted and verified by the client. CAs are not involved.

Anonymous

July 03, 2020

Permalink

Could the V3 protocol use the shorter addresses? Or is that not technically possible? Even if better cryptography and hashing are used, then why can't onion services still just use the last 16 characters of their new fingerprints?

It is unfortunately not doable. The reason v3 addresses are so long is that they are literally the full cryptographic public key (an ed25519 key).
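For the curious, the address format is specified in tor's rend-spec-v3: the 56 base32 characters encode the 32-byte public key plus a 2-byte checksum and a version byte. A small illustrative sketch (not how tor itself generates addresses; the all-zero key below is just a stand-in):

```python
import base64
import hashlib

def onion_v3_address(pubkey: bytes) -> str:
    """Encode a 32-byte ed25519 public key as a v3 onion address:
    base32(PUBKEY | CHECKSUM | VERSION), per tor's rend-spec-v3."""
    assert len(pubkey) == 32
    version = b"\x03"
    checksum = hashlib.sha3_256(b".onion checksum" + pubkey + version).digest()[:2]
    # 32 + 2 + 1 = 35 bytes -> exactly 56 base32 characters, no padding
    return base64.b32encode(pubkey + checksum + version).decode().lower() + ".onion"

print(onion_v3_address(bytes(32)))
```

Since the address *is* the key, there is no way to shorten it without losing the self-authenticating property.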

In the long run, we (the Tor community) have to come up with a "memorable address" system for them. We recently released a new Tor Browser with an experiment in that field, using the HTTPS-Everywhere plugin to provide channels that map a ".tor.onion" address to a long ".onion" address.

See Tor Browser 9.5 release post.

This is just a start. It is a hard problem and this space needs more research/help.

Or rather it would need a protocol revision for that.

Any memorable address system where you choose the name directly and it's centrally registered is a fair bit worse than even v1's directory system, before v2.

FWIW I (not the OP) for one can live with the unmemorable v3 addresses for the sake of better security which we all know is absolutely needed.

If you mostly use Tails, one possible approach is to buy a "lanyard" and wear a dedicated LUKS encrypted USB you use as your portable protected list of onion bookmarks.

The need to memorize many passphrases is another apparently unavoidable problem.

Modern technology certainly discriminates on the basis of wetware memory. Note to self: buy that alleged memory enhancing dietary supplement called, uhm... what is it called?

Anonymous

July 04, 2020

Permalink

+1 v2 needs to go

Has anyone notified Facebook et al who are yet to switch to v3?

Anonymous

July 04, 2020

Permalink

Eh, my thoughts feel very conflicted on this. I get that v2 addresses are becoming less secure and all as time moves on, but...I don't really think that should be enough to force-block them from the network after a certain update. Does this mean v2 addresses will be impossible for me to access (assuming updated TBB and network) from July 15th, 2021? If so, maybe just preventing *new* v2 onion services from being made instead should be considered.

The tor 0.4.6 release (July 15th, 2021) will prevent new v2 services from being created. Once Tor Browser ships with that tor version, yes, you also won't be able to reach v2 addresses as a tor client.

By October of that year, we plan to release new stable versions for all maintained tor series that remove v2 support, meaning that from that point on the network will progressively upgrade and v2 addresses won't be reachable.

Anonymous

July 05, 2020

Permalink

You are entirely unconvincing and user-hostile. You fundamentally misunderstand the nature of Open Source Software, and the rights that users own. It is for users to decide what we want.

I think "user-hostile" is unfair.

Cybersecurity defenders constantly confront difficult choices in which we hope to gain something valuable (e.g. better security) at the expense of weakening something else (e.g. convenience). In general I think users need to try to maintain their trust that Tor Project is mostly making well-informed and generally good choices.

I think "unconvincing" is also unfair.

The safety of onions is increasingly critical to everyone alive, but we know that v2 onions are vulnerable to attack. The consequences can be so dangerous that it is good public policy, not only to encourage people to use onions, but to encourage them to use v3 onions.

The Guardian just published an AP report on an incident which contains an interesting snippet:

theguardian.com
Dutch arrests after discovery of 'torture chamber' in sea containers
Police raids offer chilling insight into increasingly violent criminal underworld
Associated Press in The Hague
7 Jul 2020

> On 22 June, Dutch national police force officers arrested six men on suspicion of crimes including preparing kidnappings and serious assault. Detectives also discovered the seven converted sea containers in a warehouse in Wouwse Plantage, a village in the south-west of the country, close to the border with Belgium. They were tipped off by messages from an EncroChat phone that included photos of the containers and dentist’s chair with belts attached to the arm and foot supports. The messages called the warehouse the “treatment room” and the “ebi”, a reference to a top security Dutch prison. The messages also revealed identities of potential victims, who were warned and went into hiding, police said.

I had never heard of EncroChat until the recent spate of stories (most likely being pushed by the US FBI, which in recent years has been devoting enormous resources to a dangerous if comically inaccurate PR rampage called "Going Dark"). It worries me that none of the reporters writing these stories seem to question the unstated hints (presumably from the FBI) that EncroChat's "sole purpose" [sic] was enabling the worst kinds of criminal activity. None of these reporters ever bothers to remind readers that human rights defenders, victims of domestic violence, reporters, lawyers, doctors, and even social service agencies all have very good reason to use strongly encrypted messaging for legitimate and badly needed activities.

Another discouraging warning sign: while Drump now appears likely to decisively lose the 2020 US presidential election (which is in itself good for human rights all over the world), his opponent, Joe Biden, and his likely running mate, Kamala Harris, have lengthy histories of pursuing exactly the kind of destructive terrible laws BLM and cryptopunks have been fighting against. (Biden, aka "the Senator from Offshore", is the catspaw of the worst elements of Big Finance; he was a cosponsor of both the awful Omnibus Crime bill, which became law, and the awful Clipper chip bill, which failed; Harris is a former "tough on crime" prosecutor.) Thus, no matter whether the Drump or Biden ticket wins the upcoming US presidential election, US civilian cryptography proponents and other privacy advocates are likely to face even more hostility in the years to come.

(Just to be clear: I am another Tor user, not a Tor Project employee.)

Anonymous

July 05, 2020

Permalink

What's with the paternalistic statements along the lines of "it's not secure enough for you"? Do you believe yourself to be telepathic? How else would you know what users (the people Tor exists for) consider to be "secure enough", or what we need from a system? You obviously aren't telepathic, because with all things considered in depth, v2 is fine for me and many others, but v3 in its current form is not.

And what's with the circular argument of "it's being deprecated because it's not being maintained"? No one is forcing you to maintain it. It is stable enough, we know exactly what we are doing, it's what we want to use.

V3 does not have "feature parity with v2".

With excessively long identifiers, and no way to keep old identifiers in the new system, v3 is not good enough. Centralized naming schemes are completely unacceptable and no solution.

I don't see any reasonable justification for *not* retiring v2, and it seems to me that the post above outlines a reasonable plan for replacing v2 with v3 over the next few years. The post does say that TP is aware that unanticipated problems may arise and that TP plans to be flexible in response.

Some of the objections may be based upon misreading the official post. Although there are useful clarifications in the comments, it is always best to avoid misunderstanding to the extent possible (to be sure, given the technical complexity of the subjects often discussed in this blog, some misunderstandings are probably unavoidable).

Can I suggest that TP try to implement a small Writing Guide? The guide might stipulate that the general structure of official blog posts making important announcements should be:

o one sentence or bullet point summary
o introduction/background
o details
o useful digressions (if any)
o conclusion

The Guide could suggest that authors start with the details, then write the introduction and conclusion, then read what they have just written, trying to anticipate misunderstandings or questions, then rewrite to clarify or address, and lastly and very carefully write the summary.

In this case, to prevent panic the post perhaps could have started

"Tor Project wants to migrate onion sites to v3. Old v2 onions will still work until 2021, but over the next year, new onions must use v3. The main effect on users is that onion addresses will tend to be much longer, but we have some suggestions for how users can deal with that."

The timeline is actually great and contains much the same information but I see value in saying something important twice in two different ways.

https://www.onioncat.org/

OnionCat is the justification to NOT remove v2 support.
Do you know what OnionCat even is?
Have you configured and used it with applications in onionspace?
Some are even listed on this page.

So don't claim v2 is useless to everyone and every app until you know why they HAVE to use v2.
In some VERY GOOD USE CASES, v2+OnionCat is the ONLY way to get apps working in onionspace.

You have to realize, we're a minority in this community. Most people don't even realize how insecure v2 is; they just see a URL and think "ooh, this one's shorter, I like this one." Onion operators continue to run v2 because they get more hits from it. HTTPS has a similar problem, arguably worse, because the site could be using badly broken ciphers under the hood, but the browser still shows you the lock icon and most people think it's just as good as the next HTTPS site, if they even bother to check for the S at all. That's why the major browsers decided to deprecate TLS 1.0 and 1.1.

If you really do know what you're doing, you're welcome to download the Tor source code and add v2 functionality back in, and you'll be able to visit sites hosted by people who have done the same. No one is stopping you. If your favorite sites no longer host a v2 (with a patched Tor), then you'll have to convince them to do so. If you can't convince them, then, well, maybe v2 isn't really so important after all.

I'm sorry you feel this way about the deprecation of v2, but sometimes we have to sacrifice the one to save the many.