Cyber-policing again: where is the social compact?

Private companies are making public policy, with no societal agreement on what powers governments should and should not have to address cybercrime.

A few of us have been having a rather public discussion about who should be policing the Internet and how. This began with someone saying that he had a good conversation with a mature law enforcement official who was not himself troubled by data encryption in the context of Child Sexual Abuse Material (CSAM) on the Internet.

I have no doubt about the professionalism of the officer or his colleagues. It is dogma in our community that child online protection is a crutch upon which policy makers and senior law enforcement officials rest, and we certainly have seen grandstanding by those who say, “protect the children”. But that doesn’t mean there isn’t a problem.

In that same time frame you may have seen this report by Michael Keller and Gabriel Dance in the New York Times. It counted 45 million images, and 12 million reports at the time were passing through Facebook Messenger. Those were the numbers in 2019, and they were exploding then. In some cases these images were hiding in plain sight. Is 45 million a large number? Who gets to say?

Law enforcement will use the tools they have. 

We have also seen people object to June’s massive sting operation that led to the bust of hundreds of people and disrupted a drug gang network. At the same time, leading legal scholars have highlighted that the Sixth Amendment of the US Constitution (amongst others) has been gutted with regard to electronic evidence, because the courts in America have said that private entities cannot be compelled to produce their source code or methods, even when those entities are used by law enforcement. In one case, a conviction stood even though the police had contracted for the software and then couldn’t produce it.

By my tally, then, many don’t like the tools law enforcement doesn’t have, and many don’t like the tools law enforcement does have. Seems like the basis for a healthy dialog.

Friend and colleague John Levine pointed out that people aren’t having a dialog but are talking past each other, each side concluding the other is being unreasonable because of “some fundamental incompatible assumptions”. You can read his entire commentary here.

I agree, and it may well be due to some fundamental incompatible assumptions, as John described. I have said in the past that engineers make lousy politicians and politicians make lousy engineers. Put less pejoratively, the generalization is that people are expert in their own disciplines and inexpert elsewhere. We have seen politicians playing the role of doctor too, and they don’t do a good job there either; but the US is in a mess because most doctors aren’t political animals. And don’t get me started on engineers, given the recent string of legislation around encryption in places like Australia and the UK.

John added:

It’s not like we haven’t tried to explain this, but the people who believe in the wiretap model believe in it very strongly, leading them to tell us to nerd harder until we make it work their way, which of course we cannot.

This relates to a concern I have heard: that some politicians want the issue and not the solution. That may well be true. But in the meantime, Facebook and Google have indeed found ways to reduce CSAM on their platforms; and it seems to me that Apple has come up with an innovative approach to do the same while still encrypting communications and data at rest. They have all “nerded harder”, trying to strike a balance between individual privacy and hazards such as CSAM. Good for them!

Is there a risk with the Apple approach? Potentially, but it is not, as John described, that we are one disaffected clerk away from catastrophe. What I think we heard from at least some corners wasn’t that, but rather (1) a slippery slope argument in which Apple’s willingness to prevent CSAM might be exploited to limit political speech; and (2) that the approach will be circumvented through double encryption.

I have some sympathy for both arguments, but even if we add the catastrophe theory back into the mix, the fundamental question I asked some time ago remains: who gets to judge all of these risks and decide?  The tech companies?  A government?  Multiple governments?  Citizens?  Consumers?

The other question is whether some standard (à la the Sixth Amendment) should be in play prior to anyone giving up any information. To that I would only say that government exists as a compact, and foundational documents such as the Constitution must serve the practical needs of society, which include both law enforcement and the prevention of governmental abuse. If the compact of the 18th century can’t be held, what does a compact of the 21st century look like?

Yet more research and yet more dialog are required.


One thought on “Cyber-policing again: where is the social compact?”

  1. Standards and contracts are very nice (there are so many to choose from), but they need a context in which to be understood and interpreted. I see widespread disagreement even on that. As experience with emergent behaviors shows, it may even be impossible to come up with a set that encourages a desired outcome, assuming you can get agreement on what that outcome is.
    I keep looking for a framework that could produce even a set of mutually agreeable desired outcomes.
