When does safe and productive use of cryptography cross over to cryptophilia?

Encryption makes the Internet possible, but there are some controversial, and some downright stupid, uses for which we all pay.

Imagine someone creating or supporting a technology that consumes vast amounts of energy only to produce nothing of intrinsic value, and being proud of that fact. Such is the mentality of Bitcoin supporters. As the Financial Times reported several days ago, Bitcoin mining, the process by which this electronic fools’ gold is “discovered”, takes up as much power as a small country. And for what?

Cambridge University Bitcoin Electricity Consumption Index: bitcoin mining consumes more energy than entire countries

The euro, yen, and dollar are all tied to the fortunes and monetary policies of societies as represented by various governments, and those currencies are governed by the rules of those societies. Bitcoin is an attempt to strip away those controls. Some simply see cryptocurrencies as a means to disrupt the existing banking system, in order to nab a bit of the financial sector’s revenue. If so, right now they’re not succeeding.

In fact nothing about cryptocurrency is succeeding, even as people waste a tremendous amount of resources on it. Bitcoin has been an empty speculative commodity and a vehicle for criminals to receive ransoms and other fees, as happened recently when Colonial Pipeline paid a massive $4.4 million to DarkSide, a gang of cyber criminals.

What makes this currency attractive to hackers is that otherwise intelligent people purchase and promote the pseudo-currency. Elon Musk’s abrupt entrance and exit (which some might call a pump and dump) demonstrates how fleeting that value may be.

Bitcoin is nothing more than an expression of what some would call crypto-governance, a belief that technology is somehow above it all and is somehow its own intrinsic benefit to some vague society. I call it cryptophilia: an unnatural and irrational love of all things cryptographic, in an attempt to defend against some government, somewhere.

Cryptography As a Societal Benefit

Let’s be clear: without encryption there could be no Internet, because it would simply be too easy for criminals to steal information. And as is discussed below, we have no shortage of criminals. Today, thanks to efforts by the likes of Let’s Encrypt (letsencrypt.org), the majority of traffic on the Internet is encrypted, and by and large this is a good thing.

This journey took decades, and it is by no means complete.

Some see encryption as a means for those in societies that lack basic freedoms to express themselves. The argument goes that in free societies, governments are not meant to police our speech or our associations, and so they should have no problem with the fact that we choose to speak out of their earshot, the implication being that governments themselves are the greatest threat to people.

Distilling Harm and Benefit

Bitcoin is an egregious example of how this can go very wrong. A more complicated case to study is the Tor network, which obscures endpoints through a mechanism known as onion routing. The proponents of Tor claim that it protects privacy and enables human rights. Critics find that Tor is used for illicit activity. Both may be right.
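To see why it is called onion routing, here is a minimal sketch of the layered-encryption idea in Python. The cryptography package’s Fernet cipher stands in for Tor’s actual key negotiation and cell formats, which are far more involved; the point is only that each relay can peel off its own layer and no more.

```python
# Toy illustration of onion routing: each relay peels exactly one layer.
# Fernet is a stand-in cipher; real Tor negotiates per-hop keys and uses its own formats.
from cryptography.fernet import Fernet

# One key per relay (entry, middle, exit). In Tor these are negotiated, not pre-shared.
relay_keys = [Fernet.generate_key() for _ in range(3)]

def wrap(message: bytes, keys) -> bytes:
    """Client side: encrypt for the exit relay first, then wrap outward."""
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

def peel(message: bytes, key: bytes) -> bytes:
    """Relay side: remove a single layer; the remaining payload stays opaque."""
    return Fernet(key).decrypt(message)

onion = wrap(b"GET /some/resource", relay_keys)
for key in relay_keys:          # entry -> middle -> exit
    onion = peel(onion, key)
print(onion)                    # only the exit relay sees b'GET /some/resource'
```

Because each hop sees only the previous and next hop, no single relay observes both the source and the destination of a request.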

Back in 2016, Matthew Prince, the CEO of Cloudflare, reported that, “Based on data across the CloudFlare network, 94% of requests that we see across the Tor network are per se malicious.” He went on to highlight that a large portion of spam originated in some way from the Tor network.

One recent study by Eric Jardine and colleagues has shown that some 6.7% of all Tor requests are likely malicious activity. The study also asserts that so-called “free” countries are bearing the brunt of the cost of Tor, both in terms of infrastructure and crime. The Center for Strategic and International Studies quantifies the cost of cybercrime at $945 billion annually, with the losses having accelerated by 50% over two years. The Tor network is a key enabling technology for the criminals who are driving those costs, as the Colonial Pipeline attack so dramatically demonstrated.

Torflow visualization of the Tor network (2016), showing packets flowing largely between Europe and the US

Each dot on the diagram above represents a waste of resources, as packets make traversals to mask their source. Each packet may be routed and rerouted numerous times. What’s interesting to note is how dark Asia, Africa, and South America were.

Wall Street dark web market arrests in Europe and the US

While things have improved somewhat since 2016, bandwidth in many of these regions still comes at a premium. This is consistent with Jardine’s study. Miscreants such as DarkSide are in those dots, but so too are those who are seeking anonymity for what you might think are legitimate reasons.

One might think that individuals have not been prosecuted for using encryption technologies, but governments have been successful in infiltrating some parts of the so-called dark web. A recent takedown of a child porn ring, which followed a large drug bust last year, both accomplished by breaking into Tor network sites, is enlightening. First, one wonders how many other criminal enterprises haven’t been discovered. As important, if governments we like can do this, so can others. The European Commission recently funded several rounds of research into distributed trust models. Governance was barely a topic.

Other Forms of Cryptophilia: Oblivious HTTP

A new proposal known as Oblivious HTTP has appeared at the IETF that would have proxies forward encrypted requests to web servers, with the idea of obscuring traceable information about the requestor.

Oblivious HTTP flow, from draft-thomson-http-oblivious-01: a client talks through a proxy and a request resource to the target resource
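To make the flow concrete, here is a rough sketch of the pattern the draft describes: the client encapsulates its request so that only the target can read it, and the proxy relays the opaque blob. The draft specifies HPKE public-key encapsulation; the symmetric Fernet cipher below is only a placeholder to keep the sketch short, and the function names are mine, not the draft’s.

```python
# Rough sketch of the Oblivious HTTP pattern: client -> proxy -> target resource.
# The real draft uses HPKE key encapsulation; Fernet is merely a placeholder,
# and every name here is illustrative rather than taken from the specification.
from cryptography.fernet import Fernet

# In the real protocol the client discovers the target's key configuration;
# a pre-shared symmetric key stands in for that machinery here.
target_key = Fernet.generate_key()

def client_encapsulate(http_request: bytes) -> bytes:
    """Client seals the request so that only the target resource can read it."""
    return Fernet(target_key).encrypt(http_request)

def proxy_forward(blob: bytes) -> bytes:
    """Proxy knows who the client is but cannot read what it forwards."""
    return target_decapsulate(blob)   # stands in for an HTTP POST to the target

def target_decapsulate(blob: bytes) -> bytes:
    """Target reads the request but sees only the proxy's address, not the client's."""
    return Fernet(target_key).decrypt(blob)

sealed = client_encapsulate(b"GET /query?name=example.com")
print(proxy_forward(sealed))          # only the target recovers the plaintext request
```

The privacy property rests entirely on splitting knowledge: the proxy learns who asked, the target learns what was asked, and neither learns both.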

This will work with simple requests à la DNS over HTTPS, but as the authors note, there are several challenges. The first is that HTTP header information, which would be lost as part of this transaction, actually facilitates the smooth use of the web. This is particularly true of those evil cookies about which we hear so much. Thus any sort of session information would have to be re-created in the encrypted web content, or worse, in the URL itself.

Next, there is a key discovery problem: if one is encrypting end-to-end, one needs to have the correct key for the other end. If one allows for the possibility of retrieving that key from the desired web site using non-oblivious methods, then it is possible to obscure the traffic in the future. But then an interloper may know at least that the site was visited once.

The other challenge is that there is no point in obscuring the information if the proxy itself cannot be trusted, and the proxy doesn’t run for free: someone has to pay its bills. This brings us back to Jardine, and to who is paying for all of this.

Does encryption actually improve freedom?

Perhaps the best measure of whether encryption has improved freedoms can be found in the place with the biggest barrier to those freedoms on the Internet: China. China is one of the least free countries in the world, according to Freedom House.

From Freedom House: China ranks toward the bottom in terms of freedoms

Another view of the same information comes from Global Partners Digital:

Freedom to use encryption, from Global Partners Digital: much of Asia has substantial restrictions on encryption; not all countries are assessed

Paradoxically, one might answer that freedom and encryption seem to go hand in glove, at least up to a certain point. However, the causal effects seem to indicate that encryption is an outgrowth of freedom, and not the other way around. China blocks the use of Tor, as it does many sites, through its Great Firewall, and there is no documented example demonstrating that tools such as Tor have had a lasting positive impact.

On the other hand, to demonstrate how complex the situation is, and why Jardine’s (and everyone else’s) work is so speculative, it’s not like dissidents and marginalized people are going to stand up for a survey, and say, “Yes, here I am, and I’m subverting my own government’s policies.”

Oppression as a Service (OaaS)

Cryptophiliacs believe that they can ultimately beat out, or at least stay ahead of, the authorities, whereas China has shown its Great Firewall to be fully capable of adapting to new technologies over time. China and others might also employ another tactic: persisting meta-information for long periods of time, until flaws in privacy-enhancing technology can be found.

This gives rise to a nefarious opportunity: Oppression as a Service. Just as good companies will often test out new technology in their own environments and then sell it to others, so too could a country with a lot of experience at blocking or monitoring traffic. The price it charges might well depend on its aims. If pure profit is the motive, some countries might balk at the price. But if ideology is the aim, common interest could be found.

For China, this could be a mere extension of its Belt and Road initiative. Cryptography does not stop oppression. But it may – paradoxically – stop some communication, as our current several Internets continue to fragment into the multiple Internets that former Google CEO Eric Schmidt thought he was predicting in 2018 (he was really observing).

Could the individual seeking to have a private conversation with a relative or partner fly under the radar of all of this state machinery? Perhaps for now. VPN services for visitors to China thrive; but those same services are generally not available to Chinese residents, and the risks of being caught using them may far outweigh the benefits.

Re-establishing Trust: A Government Role?

In the meantime, cyber-losses continue to mount. As with any other technology, the encryption genie is out of the bottle. But should services that make use of it be encouraged? When does its measurable utility become more of a fetish?

By relying on cryptography we may be letting ourselves and others off the hook for their poor behavior. When a technical approach to enable free speech and privacy exists, who says to a miscreant country, “Don’t abuse your citizens”? At what point do we say that, regardless, and at what point do democracies not only take responsibility for their own governments’ bad behavior, but also press totalitarian regimes to protect their citizens?

The answer may lie in the trust models that underpin cryptography. It is not enough to encrypt traffic. If you do so, but don’t know who you are dealing with on the other end, all you have done is limited your exposure to that other end. But trusting that other end requires common norms to be set and enforced. Will you buy your medicines from just anyone? And if you do and they turn out to be poisons, what is your redress? You have none if you cannot establish rules of the Internet road. In other words, governance.

Maybe It’s On Us

Absent the sort of very intrusive government regulation that China imposes, the one argument that cryptophiliacs have in their pocket that may be difficult for anyone to surmount is the idea that, with the right tools, the individual gets to decide this issue, and not any form of collective. That’s no form of governance. At that point we had better all be cryptophiliacs.

We as individuals have a responsibility to consider the impact of our decisions. If buying a bitcoin is going to encourage more waste and prop up criminals, maybe we had best not. That’s the easy call. The hard call is how we support human rights while at the same time being able to stop attacks on our infrastructure, where people can also die as a result, though for different reasons.


Editorial note: I had initially misspelled cryptophilia. Thanks to Elizabeth Zwicky for pointing out this mistake.

Let’s not blame Yahoo! for a difficult policy problem

Many in the tech community are upset over reports from The New York Times and others that Yahoo! responded to an order issued by the Foreign Intelligence Surveillance Court (FISC) to search across their entire account base for specific “signatures” of people believed to be terrorists.

It is not clear what capabilities Yahoo! already has, but it would not be unreasonable to expect them to have the ability to scan incoming messages for spam and malware, for instance. What’s more, we are all the better for this sort of capability. Consider that around 85% of all email is spam, a small amount of which contains malware, and Yahoo! users don’t see most of that. Much of it can be rejected without Yahoo! having to look at the content, by just examining the source IP address of the device attempting to send Yahoo! mail, but in all likelihood they do look at some content, as many systems do. In fact, one of the most popular open source systems in the early days, SpamAssassin, did just this. The challenge from a technical perspective is to implement such a mechanism without the mechanism itself presenting a large attack surface.
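To make the source-IP check concrete, here is a minimal sketch of the kind of DNS blocklist lookup many mail servers perform before reading any content. It is only an illustration, not Yahoo!’s actual mechanism; the blocklist zone named below is one well-known example, and the accept/reject policy is a placeholder.

```python
# Sketch: reject a connecting client based on its source IP via a DNSBL lookup,
# before any message content is examined. The zone below is one common example.
import socket

DNSBL_ZONE = "zen.spamhaus.org"   # choose per local policy

def listed_in_dnsbl(client_ip: str, zone: str = DNSBL_ZONE) -> bool:
    """Return True if the reversed IPv4 address resolves inside the blocklist zone."""
    reversed_ip = ".".join(reversed(client_ip.split(".")))
    try:
        socket.gethostbyname(f"{reversed_ip}.{zone}")   # any A record means "listed"
        return True
    except socket.gaierror:
        return False                                    # NXDOMAIN: not listed

if listed_in_dnsbl("127.0.0.2"):   # 127.0.0.2 is the conventional "always listed" test entry
    print("reject the connection before the message is ever transmitted")
```

The decision is made purely on where the connection came from, which is why so much spam can be refused without anyone reading the mail itself.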

If the government asking for certain messages sounds creepy, we have to ask what a signature is. A signature normally refers to characteristics of a communication that either identify its source or indicate that it has some quality. For instance, viruses all have signatures. In this case, what is claimed is that terrorists communicated in a certain way such that they could be identified. According to The Times, the government demonstrated probable cause that this was true, and that the signature was “highly unique”*. That is, the signature likely matches very few actual messages that the government would see, although we don’t know how small that number really is. Yahoo! has denied having a capability to scan across all messages in their system, but beyond that not enough is public to know what they would have done. It may well not have been reasonable to search specific accounts, because one can easily create an account, and the terrorists may have many. The government publicly revealing either the probable cause or the signature would be tantamount to alerting terrorists that they are in fact under investigation, and that they can be tracked.

The risk to civil liberties is that there are no terrorists at all, and this is just a fishing expedition, or worse, persecution of some form. The FISC and its appellate courts are intended to provide some level of protection against abuse, but in all other cases the public has a view into whether that abuse is actually occurring. Many have complained about a lack of transparent oversight of the FISC, but the question is how to have that oversight without alerting The Bad Guys.

The situation gets more complex if one considers that other countries would want the same right to demand information from their mail service providers that the U.S. enjoys, as Yahoo’s own transparency report demonstrates.

In short we are left with a set of difficult compromises that pit gathering of intelligence on terrorists and other criminals against the risk of government abuse.  That’s not Yahoo!’s fault.  This is a hard problem that requires thoughtful consideration of these trade offs, and the timing is right to think about this.  Once again, the Foreign Intelligence Surveillance Act (FISA) will be up for reauthorization in Congress next year.  And in this case, let’s at least consider the possibility that the government is trying to fulfill its responsibility of protecting its citizens and residents, and Yahoo! is trying to be a good citizen in looking at each individual request on its merits and in accordance with relevant laws.


* No I don’t know the difference between “unique” and “highly unique” either.

Comey and Adult Conversations About Encryption

What does an adult conversation over encryption look like? To start we need to understand what Mr. Comey is seeking. Then we can talk about the risks.

AP and others are reporting that FBI director James Comey has asked for “an adult conversation about encryption.” As I’ve previously opined, we need just such a dialog between policy makers, the technical community, and the law enforcement community, so that the technical community has a clear understanding of what it is that investigators really want, and policy makers and law enforcement have a clear understanding of the limits of technology. At the moment, however, it cannot be about give and take. Just as no one can legislate that π = 3, no one can legislate that lawful intercept can be done in a perfectly secure way. Mr. Comey’s comments do not quite seem to grasp that notion. At the same time, some in the technical community do not want to give policy makers the chance to even evaluate the risks for themselves. We have recently seen stories of the government stockpiling malware kits. This should not be too surprising, given that at the moment there are few alternatives to accomplish their goals (whatever they are).

So where to start?  It would be helpful to have from Mr. Comey and friends a concise statement as to what access they believe they need, and what problem they think they are solving with that access.  Throughout All of This, such a statement has been conspicuous in its absence.  In its place we have seen sweeping assertions about grand bargains involving the Fourth Amendment.  We need to be specific about what the actual demand from the LI community is before we can have those sorts of debates.  Does Mr. Comey want to be able to crack traffic on the wire?  Does he want access to end user devices?  Does he want access to data that has been encrypted in the cloud?  It would be helpful for him to clarify.

Once we have such a statement, the technical community can provide a view as to what the risks of various mechanisms to accomplish policy goals are.  We’ve assuredly been around the block on this a few times.  The law enforcement community will never obtain a perfect solution.  They may not need perfection.  So what’s good enough for them and what is safe enough for the Internet?  How can we implement such a mechanism in a global context?  And how would the mechanism be abused by adversaries?

The devil is assuredly in the details.

How Much Do You Value Privacy?

People in my company travel a lot, and they like to have their itineraries easily accessible.  My wife wants to know when and where I will be, and that’s not at all unreasonable.  So, how best to process and share that information?  There are now several services that attempt to help you manage it.  One of those services, TripIt.Com, will take an email message as input, organize your itinerary, generate appropriate calendar events, and share that information with those you authorize.

The service is based in the U.S., and might actually share information with those you do not authorize, to market something to you, or worse. If the information is stolen, as was the case with travel information from a hotel we discussed recently, it can be resold to burglars who know when you’re away. That can be particularly nasty if in fact only you are away, and the rest of your family is not.

But before we panic and refuse to let any of this information out, one should ask just how secure that information is.  As it happens, travel itineraries are some of the least secure pieces of information you can possibly have.  All a thief really needs is an old ticket stub that has one’s frequent flyer number, and we’re off to the races.  In one case, it was shown that with this information a thief could even book a ticket for someone else.

So how, then, do we evaluate the risk of using a service like TripIt? First of all, TripIt does not use any form of encryption or certificate trust chain to verify its identity. That means that all of your itinerary details go over the network in the clear. But as it turns out, you’ve probably already transmitted all of your details in the clear to them by sending the itinerary in email. Having had a quick look at their mail servers, I can report that they do not in fact verify their server identities through the use of STARTTLS, not that you as a user can easily determine this in advance.
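That quick look can be reproduced with a few lines of Python: connect to the provider’s mail exchanger and see whether it advertises STARTTLS at all. The MX host below is a hypothetical placeholder; you would first look up the real MX records for the domain you care about.

```python
# Sketch: check whether a mail exchanger advertises STARTTLS.
# Replace the host below with an actual MX host for the domain being checked.
import smtplib

def offers_starttls(mx_host: str) -> bool:
    """Connect on port 25, say EHLO, and report whether STARTTLS is advertised."""
    with smtplib.SMTP(mx_host, 25, timeout=10) as server:
        server.ehlo()
        return server.has_extn("starttls")

print(offers_starttls("mx.example.net"))   # hypothetical MX host
```

Even when STARTTLS is offered, nothing forces the sending side to use it or to validate the certificate, which is why an itinerary sent by email should be assumed to travel in the clear.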

Some people might have stopped now, but others have more tolerance for risk.

Perhaps a bigger problem with TripIt is that neither its password change page nor its login page makes use of SSL. That means that when you enter your password, the text of that password goes over the network in the clear, for all to see. It also means that you cannot be sure that the server on the other end is actually that of TripIt. To me this is a remarkable oversight.

For all of these concerns, you still get the ability to generate an iCal calendar subscription as well as the ability to share all of this information with friends and family.  Is it worth it?  One answer is that it depends on whether you actually want to enter the information yourself, whether you care about security concerns, and whether you like using calendaring clients.  It also depends on what other services are available.

Another service that is available is Dopplr. It also attempts to be a social networking site, not unlike LinkedIn. Dopplr allows you to share your itineraries with other people, tells you about their upcoming trips (if they’re sharing with you), and lets you create an iCal subscription.

Dopplr also has some security problems, in that they do not use SSL to protect your password. They also do not use SSL for their main pages. They do, however, support OpenID, an attempt to do away with site passwords entirely. I’ll say more about OpenID in the future, but for now I’ll state simply that just because something is new does not make it better. It may be better or worse.

And so there you have it. Two services, both with very similar offerings, and both with almost the same privacy risks. One of them, by the way, could distinguish itself by improving its privacy offering. That would certainly win more of my business.