Private companies are making public policy, with no societal agreement on what powers governments should and should not have to address cybercrime.
A few of us have been having a rather public discussion about who should be policing the Internet and how. This began with someone saying that he had a good conversation with a mature law enforcement official who was not himself troubled by data encryption in the context of Child Sexual Abuse Material (CSAM) on the Internet.
I have no doubt about the professionalism of the officer or his colleagues. It is dogma in our community that child online protection is a crutch upon which policy makers and senior members of law enforcement agencies rest, and we certainly have seen grandstanding by those who say, “protect the children”. But that doesn’t mean there isn’t a problem.
Perhaps in that same time frame you saw this report by Michael Keller and Gabriel Dance in the New York Times. It documented 45 million images; 12 million of the associated reports were at the time passing through Facebook Messenger. Those were the numbers in 2019, and they were exploding then. In some cases these images were hiding in plain sight. Is 45 million a large number? Who gets to say?
Law enforcement will use the tools they have.
We have also seen people object to June’s massive sting operation that led to the bust of hundreds of people, disrupting a drug gang network. At the same time, leading legal scholars have highlighted that the Sixth Amendment of the US Constitution (amongst others) has been gutted with regard to electronic evidence, because courts in America have said that private entities cannot be compelled to produce their source code or methods, even when their tools are used by law enforcement. In one case, a conviction stood even though the police contracted for the software and then couldn’t produce it.
By my score, then, many don’t like the tools law enforcement doesn’t have, and many don’t like the tools law enforcement does have. Seems like the basis for a healthy dialog.
Friend and colleague John Levine pointed out that people aren’t having dialog but are talking past each other, and concluding the other side is being unreasonable because of “some fundamental incompatible assumptions”. You can read his entire commentary here.
I agree, and it may well be due to some fundamental incompatible assumptions, as John described. I have said in the past that engineers make lousy politicians and politicians make lousy engineers. Put in a less pejorative form, the generalization of that statement is that people are expert in their own disciplines, and inexpert elsewhere. We have seen politicians playing the role of doctors too, and they don’t do a good job there either; meanwhile, the US is in a mess partly because most doctors aren’t political animals. And don’t get me started on politicians playing engineers, given the recent string of encryption legislation in places like Australia and the UK.
John added:
It’s not like we haven’t tried to explain this, but the people who believe in the wiretap model believe in it very strongly, leading them to tell us to nerd harder until we make it work their way, which of course we cannot.
This relates to a concern that I have heard, that some politicians want the issue and not the solution. That may well be true. But in the meantime, Facebook and Google have indeed found ways to reduce CSAM on their platforms; and it seems to me that Apple has come up with an innovative approach to do the same, while still encrypting communications and data at rest. They have all “nerded harder”, trying to strike a balance between individual privacy and hazards such as CSAM. Good for them!
Is there a risk with the Apple approach? Potentially, but it is not as John described, that we are one disaffected clerk away from catastrophe. What I think we heard from at least some corners was instead (1) a slippery slope argument, in which Apple’s willingness to prevent CSAM might be exploited to limit political speech; and (2) a concern that the approach will be circumvented through double encryption.
I have some sympathy for both arguments, but even if we add the catastrophe theory back into the mix, the fundamental question I asked some time ago remains: who gets to judge all of these risks and decide? The tech companies? A government? Multiple governments? Citizens? Consumers?
The other question is whether some standard (à la the Sixth Amendment) should be in play prior to anyone giving up any information. To that I would only say that government exists as a compact, and that foundational documents such as the Constitution must serve the practical needs of society, which include both law enforcement and preventing governmental abuse. If the compact of the 18th century can’t be held, what does a compact of the 21st century look like?
Yet more research and yet more dialogue are required.
Encryption makes the Internet possible, but it has some controversial, and some downright stupid, uses for which we all pay.
Imagine someone creating or supporting a technology that consumes vast amounts of energy only to produce nothing of intrinsic value, and being proud of that fact. Such is the mentality of Bitcoin supporters. As the Financial Times reported several days ago, Bitcoin mining, the process by which this electronic fools’ gold is “discovered”, consumes as much power as a small country. And for what?
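To make the waste concrete, here is a toy sketch of the proof-of-work loop at the heart of mining: burn CPU cycles (and electricity) hashing candidate blocks until one hash happens to fall below a difficulty target. The header and difficulty below are illustrative only; the real network performs astronomically more hashes per block.

```python
# Toy proof-of-work: every failed iteration below is energy spent
# producing nothing of intrinsic value.
import hashlib

def mine(block_header: bytes, difficulty_bits: int = 16) -> int:
    """Brute-force a nonce whose double-SHA-256 digest falls below a target."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        payload = block_header + nonce.to_bytes(8, "big")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Roughly 2**16 hashes on average at this toy difficulty; Bitcoin's
# network grinds through on the order of 2**75 or more per block.
print(mine(b"toy block header"))
```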
The euro, yen, and dollar are all tied to the fortunes and monetary policies of societies as represented by various governments. Those currencies are all governed by rules of their societies. Bitcoin is an attempt to strip away those controls. Some simply see cryptocurrencies as a means to disrupt the existing banking system, in order to nab a bit of the financial sector’s revenue. If so, right now they’re not succeeding.
In fact, nothing about cryptocurrency is succeeding, even as people waste a tremendous amount of resources on it. Bitcoin has been an empty speculative commodity and a vehicle for criminals to receive ransoms and other fees, as happened recently when Colonial Pipeline paid a massive $4.4 million to DarkSide, a gang of cyber criminals.
What makes this currency attractive to hackers is that otherwise intelligent people purchase and promote the pseudo-currency. Elon Musk’s abrupt entrance and exit (what some might call a pump and dump) demonstrates how fleeting that value may be.
Bitcoin is nothing more than an expression of what some would call crypto-governance: a belief that technology is somehow above it all and carries its own intrinsic benefit for some vague notion of society. I call it cryptophilia: an unnatural and irrational love of all things cryptography, in an attempt to defend against some government, somewhere.
Cryptography As a Societal Benefit
Let’s be clear: without encryption there could be no Internet. That’s because it would simply be too easy for criminals to steal information; and as is discussed below, we have no shortage of criminals. Today, thanks to efforts such as Let’s Encrypt (letsencrypt.org), the majority of traffic on the Internet is encrypted, and by and large this is a good thing.
This journey took decades, and it is by no means complete.
Some see encryption as a means for those in societies that lack basic freedoms to express themselves. The argument goes that in free societies, governments are not meant to police our speech or our associations, and so they should have no problem with the fact that we choose to do so out of their earshot, the implication being that governments themselves are the greatest threat to people.
Distilling Harm and Benefit
Bitcoin is an egregious example of how this can go very wrong. A more complicated case to study is the Tor network, which obscures endpoints through a mechanism known as onion routing. The proponents of Tor claim that it protects privacy and enables human rights. Critics find that Tor is used for illicit activity. Both may be right.
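For readers unfamiliar with the mechanism, here is a toy sketch of the layering idea, with symmetric Fernet keys standing in for the per-circuit keys Tor actually negotiates: the client wraps its request in one encryption layer per relay, and each relay peels exactly one layer, so no single relay sees both the origin and the destination.

```python
# Toy onion routing: one encryption layer per relay; each relay learns
# only its own layer. (Requires the "cryptography" package.)
from cryptography.fernet import Fernet

relays = ["entry", "middle", "exit"]
keys = {name: Fernet(Fernet.generate_key()) for name in relays}

# The client builds the onion inside-out: the innermost layer is the exit's.
onion = b"GET http://example.com/"
for name in reversed(relays):
    onion = keys[name].encrypt(onion)

# Each relay in turn strips exactly one layer and forwards the rest.
for name in relays:
    onion = keys[name].decrypt(onion)
    print(f"{name} relay forwarded {len(onion)} bytes")

print(onion)  # only the exit relay ever sees the plaintext request
```

Note how much extra work is done per message: three encryptions and three decryptions for a single request, which is exactly the resource cost the next paragraphs tally up.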
Back in 2016, Matthew Prince, the CEO of Cloudflare, reported that, “Based on data across the CloudFlare network, 94% of requests that we see across the Tor network are per se malicious.” He went on to highlight that a large portion of spam originated in some way from the Tor network.
One recent study by Eric Jardine and colleagues has shown that some 6.7% of all Tor requests are likely malicious activity. The study also asserts that so-called “free” countries are bearing the brunt of the cost of Tor, both in terms of infrastructure and crime. The Center for Strategic and International Studies quantifies the cost of cybercrime at $945 billion annually, with the losses having accelerated by 50% over two years. The Tor network is a key enabling technology for the criminals who are driving those costs, as the Colonial Pipeline attack so dramatically demonstrated.
Each dot on the diagram above represents wasted resources, as packets make traversals to mask their source. Each packet may be routed and rerouted numerous times. What’s interesting to note is how dark Asia, Africa, and South America were.
While things have improved somewhat since 2016, bandwidth in many of these regions still comes at a premium. This is consistent with Jardine’s study. Miscreants such as DarkSide are in those dots, but so too are those who are seeking anonymity for what you might think are legitimate reasons.
Individuals may not have been prosecuted simply for using encryption technologies, but governments have been successful in infiltrating some parts of the so-called dark web. A recent takedown of a child porn ring, which followed a large drug bust last year, both accomplished by breaking into Tor network sites, is enlightening. First, one wonders how many other criminal enterprises haven’t been discovered. As important, if governments we like can do this, so can others. The European Commission recently funded several rounds of research into distributed trust models. Governance was barely a topic.
Other Forms of Cryptophilia: Oblivious HTTP
A new proposal known as Oblivious HTTP has appeared at the IETF that would have proxies forward encrypted requests to web servers, with the idea of obscuring traceable information about the requestor.
This will work with simple requests, à la DNS over HTTPS, but as the authors note, there are several challenges. The first is that HTTP header information, which would be lost as part of this transaction, actually facilitates the smooth use of the web. This is particularly true of those evil cookies about which we hear so much. Thus any sort of session information would have to be re-created in the encrypted web content, or worse, in the URL itself.
Next, there is a key discovery problem: if one is encrypting end to end, one needs the correct key for the other end. If one allows for the possibility of retrieving that key from the desired web site through non-oblivious means, then future traffic to the site can be obscured. But then an interloper may know at least that the site was visited once.
The other challenge is that there is no point in obscuring the information if the proxy itself cannot be trusted, and the proxy doesn’t run for free: someone has to pay its bills. This brings us back to Jardine, and to who is paying for all of this.
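A minimal sketch of the message flow may help, with two loud caveats: Fernet here stands in for the HPKE encapsulation the proposal actually specifies, and the relay hop is a plain function call rather than a separate HTTP server. The point is the split of knowledge: the relay sees who is asking but not what; the target sees what is asked but not by whom.

```python
# Conceptual Oblivious HTTP flow; not the real protocol's cryptography.
from cryptography.fernet import Fernet

# Published by the target; obtaining it is the key-discovery problem above.
target_key = Fernet(Fernet.generate_key())

def target_gateway(blob: bytes) -> bytes:
    """Target decrypts the request but never learns the client's address."""
    request = target_key.decrypt(blob)
    return target_key.encrypt(b"response to " + request)

def relay(blob: bytes) -> bytes:
    """Relay knows the client's address but forwards only an opaque blob."""
    return target_gateway(blob)

def client(request: bytes) -> bytes:
    """Client encrypts toward the target and hands the blob to the relay."""
    return target_key.decrypt(relay(target_key.encrypt(request)))

print(client(b"GET /resource"))
```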
Does encryption actually improve freedom?
Perhaps the best measure of whether encryption has improved freedoms can be found in the place with the biggest barrier to those freedoms on the Internet: China. China is one of the least free countries in the world, according to Freedom House.
Paradoxically, one might answer that freedom and encryption seem to go hand in glove, at least up to a point. However, the causal effects seem to indicate that encryption is an outgrowth of freedom, and not the other way around. China blocks the use of Tor, as it does many sites through its Great Firewall, and there is no documented example demonstrating that tools such as Tor have had a lasting positive impact there.
On the other hand, to demonstrate how complex the situation is, and why Jardine’s (and everyone else’s) work is so speculative, it’s not like dissidents and marginalized people are going to stand up for a survey, and say, “Yes, here I am, and I’m subverting my own government’s policies.”
Oppression as a Service (OaaS)
Cryptophiliacs believe that they can ultimately beat out, or at least stay ahead of, the authorities, whereas China has shown its Great Firewall to be fully capable of adapting to new technologies over time. China and others might also employ another tactic: persisting meta-information for long periods of time, until flaws in privacy-enhancing technology can be found.
This gives rise to a nefarious opportunity: Oppression as a Service. Just as good companies will often test out new technology in their own environments and then sell it to others, so too could a country with a lot of experience at blocking or monitoring traffic. The price it charges might well depend on its aims. If profit is the pure motive, some countries might balk at the price. But if ideology is the aim, common interest could be found.
For China, this could be a mere extension of its Belt and Road initiative. Cryptography does not stop oppression. But it may – paradoxically – stop some communication, as the Internet continues to fragment into the multiple Internets that former Google CEO Eric Schmidt thought he was predicting in 2018 (he was really observing).
Could the individual seeking to have a private conversation with a relative or partner fly under the radar of all of this state machinery? Perhaps for now. VPN services for visitors to China thrive; but those same services are generally not available to Chinese residents, and the risks of being caught using them may far outweigh the benefits.
Re-establishing Trust: A Government Role?
In the meantime, cyber-losses continue to mount. Like any other technology, the genie is out of the bottle with encryption. But should services that make use of it be encouraged? When does its measurable utility give way to mere fetish?
By relying on cryptography we may be letting ourselves and others off the hook for poor behavior. When a technical approach to enable free speech and privacy exists, who says to a miscreant country, “Don’t abuse your citizens”? At what point do we say that regardless, and at what point do democracies not only take responsibility for their own governments’ bad behavior, but also press totalitarian regimes to protect their citizens?
The answer may lie in the trust models that underpin cryptography. It is not enough to encrypt traffic. If you do so but don’t know who you are dealing with on the other end, all you have done is limit your exposure to that other end. But trusting that other end requires common norms to be set and enforced. Will you buy your medicines from just anyone? And if you do, and they turn out to be poisons, what is your redress? You have none if you cannot establish rules of the Internet road. In other words, governance.
Maybe It’s On Us
Absent the sort of very intrusive government regulation that China imposes, the one argument that cryptophiliacs have in their pocket that may be difficult for anyone to surmount is the idea that, with the right tools, the individual gets to decide this issue, and not any form of collective. That’s no form of governance. At that point we had better all be cryptophiliacs.
We as individuals have a responsibility to weigh the impact of our decisions. If buying a bitcoin is going to encourage more waste and prop up criminals, maybe we had best not. That’s the easy call. The hard call is how we support human rights while at the same time being able to stop attacks on our infrastructure, attacks in which people can also die, though for different reasons.
Editorial note: I had initially misspelled cryptophilia. Thanks to Elizabeth Zwicky for pointing out this mistake.
Pew should evolve the questions they are asking and the advice they are giving based on how the threat environment is changing. But they should keep asking.
Last year, Pew Research surveyed just over 1,000 people to try to get a feel for how informed they are about cybersecurity. That’s a great idea, because it informs us as a society as to how well consumers are able to defend themselves against common attacks. Let’s consider some ways that this survey could be evolved, and how consumers can mitigate certain common risks. Keep in mind that Pew conducted the survey in June of last year, in a fast-changing world.
Several of the questions related to phishing, Wi-Fi access points, and VPNs. VPNs have been in the news recently because of the Trump administration’s and Congress’ backtracking on privacy protections. While privacy invasion by service providers is a serious problem, accessing one’s bank at an open access point is probably considerably less so. There are two reasons for this. First, banks almost all make use of TLS to protect communications. Attempts to fake bank sites by intercepting communications will, at the very least, produce a warning that browser manufacturers have made increasingly difficult to bypass. Second, many financial institutions make use of apps on mobile devices that take some care to validate that the user is actually talking to their service. In this way, these apps actually mark a significant reduction in phishing risk. Yes, the implication is that using a laptop with a web browser is a slightly riskier means to access your bank than the app it likely provides, and yes, there’s a question hiding there for Pew in its survey.
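As a small illustration of the first point: certificate verification is on by default in browsers and in common HTTP libraries, so an interceptor on an open access point who presents a forged certificate causes a hard failure rather than a silent compromise. The URL below is just a stand-in, not any particular bank.

```python
import requests

try:
    # requests verifies the server's certificate chain by default.
    response = requests.get("https://example.com/", timeout=10)
    print("certificate verified; status", response.status_code)
except requests.exceptions.SSLError as err:
    # A forged certificate on a hostile hotspot lands here.
    print("refusing to proceed:", err)
```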
Another question on the survey refers to password quality. While this is something of a problem, there are two bigger problems hiding behind it that consumers should understand:
Reuse of passwords. Consumers will often reuse passwords simply because it’s hard to remember many of them. Worse, many password managers themselves have had vulnerabilities. And why not? It’s like the apocryphal Willie Sutton quote about robbing banks because that’s where the money is. Still, with numerous break-ins, such as those that occurred with Yahoo! last year*, and others that have surely gone unreported or unnoticed, reuse of passwords is a very dangerous practice (a sketch of the obvious mitigation appears after this list).
Aggregation of trust in smart phones. As recent articles about US Customs and Border Protection demanding access to smart phones demonstrate, access to many services such as Facebook, Twitter, and email can be gained just by gaining access to the phone. Worse, because SMS and email are often used to reset user passwords, access to the phone itself typically means easy access to most consumer services.
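Here is the sketch promised above: the mitigation for reuse is a unique, high-entropy password per site, which is exactly what a password manager automates. The site names and the plain dict are hypothetical stand-ins; a real manager encrypts its vault.

```python
# Generate a unique random password per site instead of reusing one.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def new_password(length: int = 20) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# A real password manager encrypts this vault; a dict is just a sketch.
vault = {site: new_password() for site in ("bank.example", "mail.example")}
print(vault)
```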
One final area that requires coverage: as the two followers of my blog are keenly aware, IoT presents a whole new class of risk that Pew has yet to address in its survey.
The risks I mention were not well understood even five years ago. But they are now, and have been for at least the last several years. Pew should keep surveying, and keep informing everyone, but they should also evolve the questions they are asking and the advice they are giving.
* Those who show disdain toward Yahoo! may find they themselves live in an enormous glass house.
It’s rare that hackers give you a gift, but last week that’s exactly what happened. Brian Krebs is one of the foremost security experts in the industry, and his well-known web site krebsonsecurity.com was brought down by a distributed denial of service (DDoS) attack. Attackers made use of what is said to be the largest botnet ever to attack Akamai, Krebs’ content service provider.
Why would one consider this a gift? First of all, nobody was hurt. This attack took down a web site that is not critical to anyone’s survival, not even Krebs’, and the web site was rehomed and back online in a very short period of time.
Second, the attackers revealed at least some of their capabilities by lighting up the network of hacked devices for researchers to examine and eventually take down. One aspect of this attack is the use of “IoT” devices: non-general-purpose computers that are used to control some other function. According to Krebs, the attacks made use of thermostats, web cameras, digital video recorders (DVRs) and, yes, Internet routers. The attacks themselves created an HTTP connection to the web site, retrieved a page, and closed. That’s a resource-intensive attack from the defense standpoint.
Let’s ask this question: why would any of those systems normally talk to anything other than a small number of cloud services that are intended to support them? This is what Manufacturer Usage Descriptions (MUD) is meant to address. MUD works by providing a formal language and mechanism for manufacturers to specify which systems a device is designed to connect with. The converse, therefore, is that the network can prevent the device from both being attacked and attacking others. The key to all of this is manufacturers and their willingness to describe their devices. The evolving technical details of MUD can be found in an Internet Draft, and you can create a test MUD file against that draft by using MUD File Maker. I’ll go into more detail about MUD File Maker in a later post.
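To make that concrete, below is a rough sketch of the kind of JSON a MUD file carries, expressed as a Python structure. The field names follow the shape of what eventually became RFC 8520; the draft current at the time differed in details, and every value here (URLs, device name) is hypothetical.

```python
# A hypothetical MUD file for a DVR that should speak only to its
# manufacturer's controller, and to nothing else on the Internet.
import json

mud_file = {
    "ietf-mud:mud": {
        "mud-version": 1,
        "mud-url": "https://example.com/dvr-model-x.json",  # hypothetical
        "last-update": "2016-09-27T00:00:00+00:00",
        "cache-validity": 48,        # hours a network may cache this file
        "is-supported": True,
        "systeminfo": "Example DVR, model X",
        "from-device-policy": {
            "access-lists": {"access-list": [{"name": "from-dvr"}]}
        },
    },
    "ietf-access-control-list:acls": {
        "acl": [{
            "name": "from-dvr",
            "type": "ipv4-acl-type",
            "aces": {"ace": [{
                "name": "cloud-only",
                "matches": {
                    # Traffic is permitted only to the manufacturer's
                    # controller; a network enforcing this would never
                    # let the DVR flood a random blog.
                    "ietf-mud:mud": {"controller": "https://controller.example.com"}
                },
                "actions": {"forwarding": "accept"},
            }]},
        }]
    },
}

print(json.dumps(mud_file, indent=2))
```

The design point is that the manufacturer, who knows best what the device should talk to, publishes the policy, and the network, which is in a position to enforce it, does the enforcing.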
Would MUD eliminate all attacks? No, but MUD adds an additional, helpful layer of protection that manufacturers and networks should use.
This time it was a blog that was taken down. We are in a position to reduce attacks the next time, when they may be more serious. That’s the gift hackers gave us this time. Now we just need to act.
In Yahoo!’s announcement of the theft of 500 million accounts, the Chief Information Security Officer Bob Lord wrote that the company believes a “state-sponsored actor” was behind the attack. What does that mean and how would Yahoo! come to this conclusion?
The term “state-sponsored” is vague. It could mean someone who works for a government, or it could mean someone who has in effect been contracted out by a government. Both Russia and China have been accused of this sort of behavior in the past. In the case of Russia, there are two well-known hacking organizations, Cozy Bear and Fancy Bear, that the Washington Post previously reported were involved in the cyberattack against the Democratic National Committee’s systems. In the case of China, the Elderwood Group was accused of taking part in a successful phishing attack against His Holiness, the Dalai Lama.
But why does Yahoo! believe that the culprit is one of these groups and not any other hacker? There are several possibilities:
Perhaps the botnet systems used to gain access to the Yahoo! passwords were the same as those used in an earlier attack in which a state-sponsored actor was known to be involved; or
The code used to break into Yahoo!’s internal network was the same or similar to code used in an earlier attack that is known to be from one of these groups; or
The investigation has been able to determine where the control systems of an attack are and who is accessing them.
As my friend points out, governments aren’t in this for the money but for some other purpose. That means that stolen information isn’t likely to hit the black market anytime soon. In this case, by the time Yahoo! discovered the problem, the breach was two years old.
Finding proof beyond a reasonable doubt will be difficult. Consider this: it is possible for the Chinese to make use of a botnet run in Russia or America, or for America to operate a botnet in China to attack systems in Russia, just to create a false impression as to the source, without revealing the actual source.
The only fundamental solution to this sort of attack is better end system security. Only when botnets have dried up can we establish the true source of attacks. Maybe in my lifetime this will happen. Maybe. But that means a lot of people have to do a lot of work.