Cyber-policing again: where is the social compact?

Private companies are making public policy, with no societal agreement on what powers governments should and should not have to address cybercrime.

A few of us have been having a rather public discussion about who should be policing the Internet and how. This began with someone saying that he had a good conversation with a mature law enforcement official who was not himself troubled by data encryption in the context of Child Sexual Abuse Material (CSAM) on the Internet.

I have no doubt about the professionalism of the officer or his colleagues. It is dogma in our community that child online protection is a crutch on which policy makers and senior law enforcement officials lean, and we have certainly seen grandstanding by those who cry, “protect the children.” But that doesn’t mean there isn’t a problem.

Around that same time frame you may have seen this report by Michael Keller and Gabriel Dance in the New York Times: 45 million images, with 12 million of the associated reports at the time passing through Facebook Messenger. Those were the numbers in 2019, and they were exploding then. In some cases these images were hiding in plain sight. Is 45 million a large number? Who gets to say?

Law enforcement will use the tools they have. 

We have also seen people object to June’s massive sting operation that led to the bust of hundreds of people, disrupting a drug gang network. At the same time, leading legal scholars have highlighted that the Sixth Amendment of the US Constitution (amongst others) has been gutted with regard to electronic evidence, because courts in America have held that private entities cannot be compelled to produce their source code or methods, even when those entities’ tools are used by law enforcement. In one case a conviction stood even though the police had contracted for the software and then couldn’t produce it.

By my count, then, many don’t like the tools law enforcement doesn’t have, and many don’t like the tools law enforcement does have. Seems like the basis for a healthy dialog.

Friend and colleague John Levine pointed out that people aren’t having dialog but are talking past each other, and concluding the other side is being unreasonable because of “some fundamental incompatible assumptions”. You can read his entire commentary here.

I agree, and it may well be due to some fundamental incompatible assumptions, as John described. I have said in the past that engineers make lousy politicians and politicians make lousy engineers. Put less pejoratively, the generalization of that statement is that people are expert in their own disciplines, and inexpert elsewhere. We have seen politicians playing the role of doctors too, and they don’t do a good job there either; but the US is in a mess because most doctors aren’t political animals. And don’t get me started on engineers, given the recent string of legislation around encryption in places like Australia and the UK.

John added:

It’s not like we haven’t tried to explain this, but the people who believe in the wiretap model believe in it very strongly, leading them to tell us to nerd harder until we make it work their way, which of course we cannot.

This relates to a concern that I have heard, that some politicians want the issue and not the solution. That may well be true. But in the meantime, Facebook and Google have indeed found ways to reduce CSAM on their platforms; and it seems to me that Apple has come up with an innovative approach to do the same, while still encrypting communications and data at rest. They have all “nerded harder”, trying to strike a balance between the individual’s privacy and hazards such as CSAM (amongst other problems). Good for them!
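To make that concrete: the core idea behind these detection systems is to match images on the device against a database of hashes of already-known abusive material, rather than to read content broadly. Apple’s published design is considerably more elaborate (a perceptual hash called NeuralHash combined with threshold cryptography), so the toy sketch below shows only the underlying idea; every name and value in it is illustrative.

```python
# Toy sketch of on-device hash matching, the simplest form of the idea
# behind these systems. Apple's published design (perceptual hashing
# plus threshold cryptography) is far more involved; everything here
# is illustrative.
import hashlib

# Hypothetical database distributed to the device: hashes of known
# abusive images, never the images themselves.
KNOWN_BAD_HASHES = {hashlib.sha256(b"known-bad-example").digest()}

def should_flag(image_bytes: bytes) -> bool:
    # A real system uses a perceptual hash so that resized or re-encoded
    # copies still match; SHA-256 merely stands in for that step here.
    return hashlib.sha256(image_bytes).digest() in KNOWN_BAD_HASHES

# Matching happens on the device, before or alongside upload, which is
# how content can be checked while communications and stored data
# remain encrypted.
assert should_flag(b"known-bad-example")
assert not should_flag(b"family photo")
```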

Is there a risk with the Apple approach? Potentially, but it is not, as John described, that we are one disaffected clerk away from catastrophe. What I think we heard from at least some corners wasn’t that, but rather (1) a slippery slope argument in which Apple’s willingness to prevent CSAM might be exploited to limit political speech; and (2) an argument that the approach can be gotten around through double encryption.

I have some sympathy for both arguments, but even if we add the catastrophe theory back into the mix, the fundamental question I asked some time ago remains: who gets to judge all of these risks and decide?  The tech companies?  A government?  Multiple governments?  Citizens?  Consumers?

The other question is whether some standard (à la the Sixth Amendment) should be in play before anyone gives up any information. To that I would only say that government exists as a compact, that foundational documents such as the Constitution must serve the practical needs of society, and that those needs include both law enforcement and the prevention of governmental abuse. If the compact of the 18th century can’t be held, what does a compact of the 21st century look like?

Yet more research and yet more dialogue are required.


Pas Parler?

Will the real Internet government please stand up?

Parler in Prison

This weekend, Google, Apple, and Amazon all took steps to remove the right-wing conspiracy web site Parler from their services, steps that will cripple the social media site for some period of time. In many ways, Parler had it coming. Amazon in particular alleged that Parler refused to take prompt action to remove abusive content that violated Amazon’s terms of service.

In response, my right-wing friends have gone nearly indiscriminately crazy, complaining that their First Amendment rights have been violated. Let’s review that amendment of the U.S. Constitution:

Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

Amendment I of the U.S. Constitution.

In other words, Congress cannot stop someone from speaking. But these companies are not Congress, nor an arm of the U.S. government. We could, however, say that they are a form of government, inasmuch as these companies, along with a small number of others such as TikTok, control societal discourse. What rules would govern them if they decided that moveon.org was also not to their liking? Could these services exclude content that criticizes them?

Parler is a relative newcomer. Much in the same way that Fox News has lost its conservative gleam to NewsMax, Facebook and Twitter lost their gleam when they started applying editorial control to posts. They did this because they gauged societal harm against whatever short-term revenue they were collecting from the likes of Donald Trump. There was seemingly no reason they had to, at least in the United States. U.S. law says this:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

47 USC § 230

Meddle with this rule at your peril. If we shift the burden of policing to online services, social media sites as we know them will cease to be, Gmail and Yahoo! Mail will be imperiled, and Amazon will no longer be able to offer customer reviews. If there is a middle ground to be found, then scale factors must be considered, and any middle ground may well increase the risks of starting up new services. If the price of entry for a new Facebook or Twitter competitor is fancy artificial intelligence systems and patents, then we may have done ourselves no service in the long run.

The United Social Networks Nations

There are other consequences to Apple and Google removing Parler from their respective phone and tablet stores: I saw one conversation in which someone was describing to her friends how to turn off automatic software updates. Software updates are the means by which developers correct vulnerabilities they have created. By disabling those updates, people leave themselves vulnerable to attack.

Today Parler is losing its voice, arguably for very deserved reasons. Tomorrow, some other site might lose its access. Will those reasons be just as good and who will decide?

Ain’t No Perfect. That’s why we need network protection.

If Apple can blow it, so too can the rest of us. That’s why a layered defensive approach is necessary.

When we talk about secure platforms, one name has always risen to the top: Apple. Apple’s business model for iOS has repeatedly been demonstrated to provide security results superior to its competitors’. In fact, Apple’s security model is so good that governments feel threatened enough by it that we have had repeated calls for some form of back door into their phones and tablets. CEO Tim Cook has repeatedly taken the stage to argue for such strong protection, and indeed I personally have friends who take this stuff so seriously that they lose sleep over some of the design choices that are made.

And yet this last week, we learned of a macOS vulnerability that was as easy to exploit as typing “root” twice to gain privileged access.

Wait. What?

Ain’t no perfect.

If the best and the brightest of the industry can occasionally have a flub like this, what about the rest of us? I recently installed a single sign-on package from Ping Identity, a company whose job it is to provide secure access. This simple application, which generates cryptographic sequences of numbers to be used as one-time passwords, weighs in at over 70 megabytes and includes a complex Java runtime environment (JRE). How many bugs remain hidden in those hundreds of thousands of lines of code?
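For a sense of scale, the arithmetic at the heart of such a token generator fits in a dozen lines. Here is a minimal TOTP generator in the spirit of RFC 6238, using only the Python standard library; it is a sketch offered for contrast, not Ping Identity’s implementation.

```python
# Minimal time-based one-time password (TOTP) generator, in the spirit
# of RFC 6238. A sketch for contrast with a 70 MB install; not Ping
# Identity's implementation.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    counter = int(time.time()) // interval          # 30-second time step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                         # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The shared secret would be provisioned once, at enrollment.
print(totp(b"shared-secret-provisioned-at-enrollment"))
```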

Now enter the Internet of Things, where devices that have not traditionally been connected to the network are built by manufacturers who have not spent decades honing security expertise. What sort of problems lurk in each and every one of those devices?

It is simply not possible to assure perfect security, and because computers are designed by imperfect humans, all these devices are imperfect.  Even devices that we believe are secure today will have vulnerabilities exposed in the future.  This is one of the reasons why the network needs to play a role.

The network stands between you and attackers, even when devices have vulnerabilities. The network is best positioned to protect your devices when it knows what sort of access a device needs to operate properly. Your washing machine, for example, needs only a narrow, predictable set of communications. But even for your laptop, where you might want to access whatever you want, whenever you want, through whatever system you wish, informing the network makes it possible to stop all communications that you don’t want. To be sure, endpoint manufacturers should not rely solely on network protection. Devices should be built with as much protection as is practicable and affordable. The network provides an additional layer of protection.
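Here is a toy of what informing the network can look like: the manufacturer declares the flows a device needs, and the network denies everything else. (The IETF’s Manufacturer Usage Description, RFC 8520, standardizes such declarations; the profile and host names below are hypothetical.)

```python
# Default-deny enforcement from a declared device profile. A toy version
# of the idea standardized by MUD (RFC 8520); all names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    dst_host: str
    dst_port: int

# What the washing machine's manufacturer says it needs: firmware
# updates and a status service, and nothing more.
WASHER_PROFILE = {
    Flow("updates.example-washer.com", 443),
    Flow("status.example-washer.com", 443),
}

def permit(flow: Flow, profile: set) -> bool:
    # Any flow not in the declared profile is dropped, which blunts an
    # attacker even after the device itself has been compromised.
    return flow in profile

assert permit(Flow("updates.example-washer.com", 443), WASHER_PROFILE)
assert not permit(Flow("command-and-control.example.net", 6667), WASHER_PROFILE)
```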

Endpoint manufacturers thus far have not done a good job of making use of the network for protection. That requires a serious rethink, and Apple is the poster child for why: they are the best and the brightest, and they got it wrong this time.

Court Order to Apple to Unlock San Bernardino iPhone May Unlock Hackers

A judge’s order that Apple cooperate with federal authorities in the San Bernardino shooting investigation may have serious unintended consequences. There are no easy answers. Once more, a broad dialog is required.

Previously I opined about how a dialog should occur between policy makers and the technical community over encryption. The debate has moved on. Now, the New York Times reports that federal magistrate judge Sheri Pym has ordered Apple to facilitate access to the iPhone of Syed Rizwan Farook, one of the San Bernardino attackers. The Electronic Frontier Foundation is joining Apple in the fight against the order.

The San Bernardino fight raises both technical and policy questions.

Can Apple retrieve data off the phone?

Apparently not.  According to the order, Apple is required to install an operating system that would allow FBI technicians to make as many password attempts as they can without the device delaying them or otherwise deleting any information.  iPhones have the capability of deleting all personal information after a certain number of authentication failures.

You may ask: why doesn’t the judge just order Apple to create an operating system that doesn’t require a password? According to Apple, the password used to access the device is used to derive a key-encrypting key (KEK), which in turn decrypts the key that decrypts the stored information. Bypassing the password check therefore doesn’t get you any of the data; the FBI needs the password itself.
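A toy model makes the point. The passcode is an input to key derivation, not merely a gate that can be skipped; the sketch below uses PBKDF2 and AES key wrap from the Python `cryptography` package. Apple’s real design additionally entangles a hardware-fused UID key, which is why the derivation cannot even be run off the device.

```python
# Why bypassing the passcode check alone yields nothing: without the
# passcode there is no KEK, and without the KEK the wrapped data key
# is just noise. Illustrative only; not Apple's actual construction.
import hashlib
import os
from cryptography.hazmat.primitives.keywrap import (
    InvalidUnwrap, aes_key_unwrap, aes_key_wrap)

device_salt = os.urandom(16)
data_key = os.urandom(32)        # the key that actually encrypts storage

def derive_kek(passcode: str) -> bytes:
    # Stretch the passcode into a key-encrypting key (KEK). Real devices
    # also mix in a hardware-bound UID key.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), device_salt,
                               100_000, dklen=32)

# At setup, the data key is stored only in wrapped (encrypted) form.
wrapped = aes_key_wrap(derive_kek("correct-passcode"), data_key)

# With the passcode, the data key unwraps; without it, the unwrap fails.
assert aes_key_unwrap(derive_kek("correct-passcode"), wrapped) == data_key
try:
    aes_key_unwrap(derive_kek("1234"), wrapped)
except InvalidUnwrap:
    print("wrong passcode: nothing to bypass, nothing to read")
```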

What Apple can do is install a new operating system without the permission of the owner. There are good reasons for them to have this ability. For one, it is possible that a previous installation failed or that the copy of the operating system stored on a phone has been corrupted in some way. If technicians couldn’t install a new version, the phone itself would become useless. This actually happened to me personally.

The FBI can’t build such a version of the operating system on its own. As is best practice, iPhones validate that all operating systems are properly digitally signed by Apple. Only Apple has the keys necessary to sign images.
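The shape of that check is simple, even though Apple’s actual signing scheme differs from the Ed25519 sketch below: without the vendor’s private key, no one can produce an image the device will accept.

```python
# Shape of a boot-time code-signing check. Ed25519 stands in for Apple's
# actual scheme; the point is that only the holder of the private key
# can produce an image the device will accept.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

vendor_key = Ed25519PrivateKey.generate()   # private key: held only by the vendor
public_key = vendor_key.public_key()        # public key: baked into every device

os_image = b"...operating system bytes..."
signature = vendor_key.sign(os_image)       # produced at the vendor

def device_will_boot(image: bytes, sig: bytes) -> bool:
    # The device refuses any image whose signature fails to verify,
    # which is why the FBI cannot build an acceptable image itself.
    try:
        public_key.verify(sig, image)
        return True
    except InvalidSignature:
        return False

assert device_will_boot(os_image, signature)
assert not device_will_boot(os_image + b"tampered", signature)
```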

With a new version of the software on the iPhone 5c, FBI technicians would be able to mount a brute-force attack, trying all passcodes until they found the right one. This won’t be effective on later model iPhones, because their hardware slows down queries, as detailed in this blog.
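The arithmetic of that brute force is worth seeing. With the retry limits and delays removed, sweeping a four-digit passcode space is a matter of minutes; hardware-enforced escalating delays turn the same loop into years. The check function below is purely a stand-in for the device’s unlock attempt.

```python
# Exhaustive sweep of a four-digit passcode space. try_passcode() is a
# stand-in for the device's unlock attempt; the loop is the whole attack.
import itertools

def try_passcode(code: str) -> bool:
    return code == "7391"   # hypothetical correct passcode

for attempt, digits in enumerate(itertools.product("0123456789", repeat=4), 1):
    code = "".join(digits)
    if try_passcode(code):
        print(f"found {code} after {attempt} attempts")
        break

# At the roughly 80 ms per try that iPhone 5c-era key derivation allows,
# all 10,000 codes take under 15 minutes. With escalating delays (say,
# an hour per attempt after a few failures), the same sweep becomes
# infeasible. That is the entire point of the rate limit.
```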

Would such a capability amount to malware?

Kevin S. Bankston, director of New America’s Open Technology Institute, has claimed that the court is asking Apple to create malware for the FBI to use on Mr. Farook’s device. There’s no single clean definition of malware, but a good test of whether the O/S the FBI is asking for is in fact malware is this: if this special copy of the O/S leaked from the FBI, could “bad guys” (for some value of “bad guys”) also use the software against the “good guys” (for some value of “good guys”)? Apple has the ability to write into the O/S a check of the serial number of the device. It would not be possible for bad guys to modify that number without invalidating the signature the phone checks before loading. Thus, by this definition, the software would not amount to malware. But I wouldn’t call it goodware, either.
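That serial-number check can be sketched too: if the target device’s serial number is part of what gets signed, a leaked copy of the special O/S cannot be pointed at a different phone without invalidating the signature. As in the previous sketch, Ed25519 and all names here are illustrative.

```python
# Binding a signed image to one device. If the serial number is inside
# the signed payload, retargeting the image breaks the signature.
# Illustrative only; not Apple's actual scheme.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

vendor_key = Ed25519PrivateKey.generate()
public_key = vendor_key.public_key()

TARGET_SERIAL = b"TARGET-DEVICE-SERIAL"   # hypothetical
image = b"...special unlock-assisting build..."
signature = vendor_key.sign(TARGET_SERIAL + image)

def boots_on(serial: bytes) -> bool:
    # Each device verifies the signature over its own serial number plus
    # the image, so this build loads on exactly one phone.
    try:
        public_key.verify(signature, serial + image)
        return True
    except InvalidSignature:
        return False

assert boots_on(TARGET_SERIAL)
assert not boots_on(b"SOME-OTHER-PHONE")
```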

Is a back door capability desirable?

Unfortunately, here there are no easy answers, only trade-offs. On the one hand, one must agree that the FBI’s investigation is impeded by the lack of access to Mr. Farook’s iPhone, and as other articles show, this case is neither the first, nor will it be the last, of its kind. As a result, agents may not be able to trace leads to other possible co-conspirators. A Berkman Center study claims that law enforcement has sufficient access to metadata to determine those links, and there’s some reason to believe that. When someone sends an email, email servers between the sender and recipient keep a log that a message was sent from one person to another. A record of phone calls is kept by the phone company. But does Apple keep a record of FaceTime calls? Why would they, if it meant a constant administrative burden, not to mention additional liability and embarrassment when (not if) they suffer a breach? More to the point, having access to the content on the phone provides investigators with clues as to what metadata to look for, based on what applications were installed and used on the phone.

If Apple had the capability to access Mr. Farook’s iPhone, the question would then turn to how that capability would be overseen. The rules about how companies handle customer data vary from one jurisdiction to another. In Europe, the Data Protection Directive is quite explicit, for instance. The rules are looser in the United States. Many are worried that if U.S. authorities have access to data, so will other countries, such as China or Russia. Those worries are not unfounded: a technical capability knows nothing of politics. Businesses fear that if they accede to U.S. demands, they must also accede to other governments’ demands if they wish to sell products and services in those countries. This means that there are billions of dollars at stake. Worse, other countries may demand more intrusive mechanisms. As bad as that is, and it’s very bad, there is worse.

The Scary Part

If governments start ordering Apple to insert or create malware, what other technology will also come under these rules? It is plain as day that any rules that apply to Apple iPhones would also apply to Android-based cell phones. But what about other devices, such as televisions? How about refrigerators? Cars? Home security systems? Baby monitoring devices? Children’s toys? And this is where it gets really scary. Apple has one of the most competent security organizations in the world. They probably understand device protection better than most government clandestine agencies. The same cannot be said for other device manufacturers. If governments require these other manufacturers to provide back-door access, it would be tantamount to handing the keys to all our homes to criminals.

To limit this sort of damage, there needs to be a broad consensus as to what sorts of devices governments should be able to access, under what circumstances that access should happen, and how that access will be overseen to avert abuse.  This is not an easy conversation.  That’s the conversation Apple CEO Tim Cook is seeking.  I agree.

It doesn’t matter that much that Apple and Google encrypt your phone

Apple’s and Google’s announcements that they will encrypt information on your phone are nice, but they won’t help much. Most data is in the cloud these days, and your protections in the cloud are governed by the laws of numerous countries, almost all of which have quite large exceptions.

At the Internet Engineering Task Force we have taken a very strong stand that pervasive surveillance is a form of attack. This is not a matter of lack of trust in any one organization, but rather a statement that if one organization can snoop on your information, others will be able to do so as well, and they may not be as nice as the NSA. The worst you can say about the NSA is that a few analysts got carried away and spied on their partners. With real criminals it’s another matter. As we have seen with Target, other large department stores, and now JP Morgan, theirs is a business, and you are their commodity, in the form of private information and credit card numbers.

So now here comes Apple, saying that it will protect you from the government. Like all technology, this “advance” has its pluses and minuses. To paraphrase a leader in the law enforcement community, everyone wants their privacy until it’s their child at risk. However, in the United States, at least, we have a standard that the director of the FBI seems to have forgotten: it’s called probable cause. It’s based on a dingy, pesky old amendment to the Constitution which states:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

So what happens if one does have probable cause? This is where things get interesting. If one has probable cause to believe that there is an imminent threat to life or property and one can’t break into a phone, then something bad may happen. Someone could get hurt, for instance. Is that Apple’s fault? And who has the right to interpret and enforce the Fourth Amendment? If Apple has that right, then do I have the right to interpret what laws I will? On the other hand, Apple might respond that it has no responsibility to provide law enforcement anything, and that all it is doing is exercising the right of free speech to deliver a product that others use to communicate. Cryptographer and professor Daniel Bernstein successfully argued this case in the 9th Circuit in the 1990s. And he was right to do so, because, going back to the beginning of this polemic, even if you believe your government to be benevolent, if it can access your information, so can a bad guy, and there are far more bad guys out there.

Apple hasn’t simply made this change because it doesn’t like the government. Rather, the company has recognized that for consumers to put private information into their phones, they must trust the device not to be mishandled by others. At the same time, Apple has said through its public statements that information that goes into its cloud is still subject to lawful seizure. And this brings us back to the point that President Obama made at the beginning of the year: government risk isn’t the only form of risk. The risk remains that private aggregators of information, like Apple and Google (or worse, Facebook), will continue to use your information for whatever purposes they see fit. If you don’t think this is the case, ask yourself how much you pay for their services.

And since most of the data about you or that you own is either in the cloud or heading to the cloud, you might want to worry less about the phone or tablet, and more about where your data actually resides. If you’re really concerned about governments, then you might also want to ask this question: which governments can seize your data? The answer to that question is not straightforward, but there are three major factors:

  1. Where the data resides;
  2. Where you reside;
  3. Where the company that controls the data resides.

For instance, if you reside in the European Union, then nominally you should receive some protection from the Data Protection Directive. Any company that serves European residents has to respect the rights specified in it. On the other hand, there are of course exceptions for law enforcement. If a server resides in some random country, however, like the Duchy of Grand Fenwick, perhaps there is a secret law that states that operators must provide the government all sorts of data and must not tell anyone they are doing so. That’s really not so far from what the U.S. government did with National Security Letters.

There’s a new service that Cisco has rolled out, called the Intercloud, that neatly addresses this matter for large enterprises, providing a framework to keep some data local and some data in the cloud, with the enterprise having some control over which. Whether that benefit will extend to consumers is unclear.

In the end I conclude that people who are truly worried about their data need to consider what online services they use, including Facebook, this blog you are reading right now, Google, Amazon, or anyone else. They also have to consider how, if at all, they are using the cloud. I personally think they have to worry less about physical devices, and that largely speaking Apple’s announcement is but a modest improvement in overall security. The same could be said for IETF efforts.