Cyber-policing again: where is the social compact?

Private companies are making public policy, with no societal agreement on what powers governments should and should not have to address cybercrime.

A few of us have been having a rather public discussion about who should be policing the Internet and how. This began with someone saying that he had a good conversation with a mature law enforcement official who was not himself troubled by data encryption in the context of Child Sexual Abuse Material (CSAM) on the Internet.

I have no doubt about the professionalism of the officer or his colleagues.  It is dogma in our community that child online protection is a crutch upon which policy makers and senior members of the law enforcement agencies rest, and we certainly have seen grandstanding by those who say, “protect the children”.  But that doesn’t mean there isn’t a problem.

Perhaps in that same time frame you saw this report by Michael Keller and Gabriel Dance in the New York Times: 45 million images, with 12 million reports at the time passing through Facebook Messenger.  Those were the numbers in 2019, and they were exploding even then.  In some cases these images were hiding in plain sight.  Is 45 million a large number?  Who gets to say?

Law enforcement will use the tools they have. 

We have also seen people object to June’s massive sting operation that led to the arrest of hundreds of people, disrupting a drug gang network.  At the same time, leading legal scholars have highlighted that the Sixth Amendment of the US Constitution (amongst others) has been gutted with regard to electronic evidence, because the courts in America have said that private entities cannot be compelled to produce their sources or methods, even when those entities are used by law enforcement.  In one case, a conviction stood even though the police contracted the software and then couldn’t produce it.

By my count, then, many don’t like the tools law enforcement doesn’t have, and many don’t like the tools law enforcement does have.  Seems like the basis for a healthy dialog.

Friend and colleague John Levine pointed out that people aren’t having dialog but are talking past each other, and concluding the other side is being unreasonable because of “some fundamental incompatible assumptions”. You can read his entire commentary here.

I agree, and it may well be due to some fundamental incompatible assumptions, as John described.  I have said in the past that engineers make lousy politicians and politicians make lousy engineers.  Put less pejoratively, the generalization of that statement is that people are expert in their own disciplines and inexpert elsewhere.  We have seen politicians playing the role of doctors too, and they don’t do a good job there either; and the US is in a mess in part because most doctors aren’t political animals.  And don’t get me started on politicians playing engineer, given the recent string of legislation around encryption in places like Australia and the UK.

John added:

It’s not like we haven’t tried to explain this, but the people who believe in the wiretap model believe in it very strongly, leading them to tell us to nerd harder until we make it work their way, which of course we cannot.

This relates to a concern that I have heard, that some politicians want the issue and not the solution. That may well be true.  But in the meantime, Facebook and Google have indeed found ways to reduce CSAM on their platforms; and it seems to me that Apple has come up with an innovative approach to do the same, while still encrypting communications and data at rest.  They have all “nerded harder”, trying to strike a balance between the individual’s privacy and hazards such as CSAM (amongst other problems).  Good for them!

Is there a risk with the Apple approach?  Potentially, but it is not as John described, that we are one disaffected clerk away from catastrophe.  What I think we heard from at least some corners wasn’t that, but rather (1) a slippery slope argument in which Apple’s willingness to prevent CSAM might be exploited to limit political speech; and (2) that the approach will be circumvented through double encryption.

I have some sympathy for both arguments, but even if we add the catastrophe theory back into the mix, the fundamental question I asked some time ago remains: who gets to judge all of these risks and decide?  The tech companies?  A government?  Multiple governments?  Citizens?  Consumers?

The other question is whether some standard (à la the Sixth Amendment) should be in play prior to anyone giving up any information.  To that I would only say that government exists as a compact, and that foundational documents such as the Constitution must serve the practical needs of society, which include both law enforcement and preventing governmental abuse. If the compact of the 18th century can’t be held, what does a compact of the 21st century look like?

Yet more research and yet more dialogue is required.


When does safe and productive use of cryptography cross over to cryptophilia?

Encryption makes the Internet possible, but there are some controversial and some downright stupid uses for which we all pay.

Imagine someone creating or supporting a technology that consumes vast amounts of energy only to produce nothing of intrinsic value, and being proud of that fact. Such is the mentality of Bitcoin supporters. As the Financial Times reported several days ago, Bitcoin mining, the process by which this electronic fools’ gold is “discovered”, takes up as much power as a small country. And for what?

Cambridge University Bitcoin Electricity Consumption Index shows that bitcoin mining consumes more energy than entire countries
Cambridge University Bitcoin Electricity Consumption Index

The euro, yen, and dollar are all tied to the fortunes and monetary policies of societies as represented by various governments. Those currencies are all governed by rules of their societies. Bitcoin is an attempt to strip away those controls. Some simply see cryptocurrencies as a means to disrupt the existing banking system, in order to nab a bit of the financial sector’s revenue. If so, right now they’re not succeeding.

In fact nothing about cryptocurrency is succeeding, while people waste a tremendous amount of resources. Bitcoin has been an empty speculative commodity and a vehicle for criminals to receive ransoms and other fees, as happened recently when the Colonial Pipeline paid a massive $4.4 million to DarkSide, a gang of cyber criminals.

What makes this currency attractive to hackers is that otherwise intelligent people purchase and promote the pseudo-currency. Elon Musk’s abrupt entrance and exit (what some might call a pump and dump) demonstrate how fleeting that value may be.

Bitcoin is nothing more than an expression of what some would call crypto-governance: a belief that technology is somehow above it all, an intrinsic benefit to some vague notion of society. I call it cryptophilia: an unnatural and irrational love of all things cryptography, in an attempt to defend against some government, somewhere.

Cryptography As a Societal Benefit

Let’s be clear: without encryption there could be no Internet. That’s because it would simply be too easy for criminals to steal information. And as is discussed below, we have no shortage of criminals. Today, thanks to efforts such as letsencrypt.org, the majority of traffic on the Internet is encrypted, and by and large this is a good thing.

This journey took decades, and it is by no means complete.

Some see encryption as a means for those in societies who lack basic freedoms to express themselves. The argument goes that in free societies, governments are not meant to police our speech or our associations, and so they should have no problem with the fact that we choose to do so out of their earshot, the implication being that governments themselves are the greatest threat to people.

Distilling Harm and Benefit

Bitcoin is an egregious example of how this can go very wrong. A more complicated case to study is the Tor network, which obscures endpoints through a mechanism known as onion routing. The proponents of Tor claim that it protects privacy and enables human rights. Critics find that Tor is used for illicit activity. Both may be right.

Back in 2016, Matthew Prince, the CEO of Cloudflare, reported that, “Based on data across the CloudFlare network, 94% of requests that we see across the Tor network are per se malicious.” He went on to highlight that a large portion of spam originated in some way from the Tor network.

One recent study by Eric Jardine and colleagues has shown that some 6.7% of all Tor requests are likely malicious activity. The study also asserts that so-called “free” countries are bearing the brunt of the cost of Tor, both in terms of infrastructure and crime. The Center for Strategic and International Studies quantifies the cost of cybercrime at $945 billion annually, with the losses having accelerated by 50% over two years. The Tor network is a key enabling technology for the criminals who are driving those costs, as the Colonial Pipeline attack so dramatically demonstrated.

Visualization of the Tor network, showing packets flowing largely between Europe and the US.
Torflow visualization of the Tor network (2016)

Each dot on the diagram above demonstrates a waste of resources, as packets make traversals to mask their source. Each packet may be routed and rerouted numerous times. What’s interesting to note is how dark Asia, Africa, and South America were.

Wall Street dark web market arrests in Europe and the US

While things have improved somewhat since 2016, bandwidth in many of these regions still comes at a premium. This is consistent with Jardine’s study. Miscreants such as DarkSide are in those dots, but so too are those who are seeking anonymity for what you might think are legitimate reasons.

One might think that individuals using encrypted technologies are beyond prosecution, but governments have been successful in infiltrating some parts of the so-called dark web. A recent takedown of a child porn ring, and a large drug bust last year, both accomplished by breaking into Tor network sites, are enlightening. First, one wonders how many other criminal enterprises haven’t been discovered. As important, if governments we like can do this, so can others. The European Commission recently funded several rounds of research into distributed trust models; governance was barely a topic.

Other Forms of Cryptophilia: Oblivious HTTP

A new proposal known as Oblivious HTTP has appeared at the IETF that would have proxies forward encrypted requests to web servers, with the idea of obscuring traceable information about the requestor.

The flow diagram for Oblivious HTTP shows a client talking through a proxy resource and a request resource to the target resource.
Oblivious HTTP, from draft-thomson-http-oblivious-01
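The division of knowledge in that diagram can be illustrated with a toy simulation. To be clear about what is mine and what is the draft’s: the class names and the XOR-keystream “encryption” below are illustrative stand-ins I made up (the real protocol uses HPKE for encapsulation); the point that survives the simplification is that the proxy learns who is asking without seeing what, while the target sees the request without learning who sent it.

```python
# Toy model of the Oblivious HTTP split of knowledge. The "encryption"
# is a placeholder XOR keystream, NOT the HPKE scheme the draft specifies.
import hashlib
from itertools import cycle

def toy_seal(key: bytes, plaintext: bytes) -> bytes:
    """Placeholder encryption: XOR with a key-derived keystream."""
    stream = hashlib.sha256(key).digest()
    return bytes(p ^ s for p, s in zip(plaintext, cycle(stream)))

toy_open = toy_seal  # XOR is its own inverse

class Target:
    """The target resource: sees request content, never the client address."""
    def __init__(self):
        self.key = b"target-key-stand-in"
    def handle(self, sealed_request: bytes) -> bytes:
        request = toy_open(self.key, sealed_request)
        return toy_seal(self.key, b"200 OK for " + request)

class Proxy:
    """The proxy resource: sees the client's address, forwards opaque bytes."""
    def __init__(self, target: Target):
        self.target = target
        self.client_log = []
    def forward(self, client_addr: str, sealed: bytes) -> bytes:
        self.client_log.append(client_addr)  # proxy learns *who*...
        return self.target.handle(sealed)    # ...but relays *what* unread

target = Target()
proxy = Proxy(target)
plaintext = b"GET /dns-query?name=example.com"
sealed = toy_seal(target.key, plaintext)
reply = toy_open(target.key, proxy.forward("192.0.2.7", sealed))
```

Note that `Target.handle` is never given the client address at all; that separation, not the cipher, is what the oblivious design buys.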

This will work with simple requests à la DNS over HTTPS, but as the authors note, there are several challenges. The first is that HTTP header information, which would be lost as part of this transaction, actually facilitates the smooth use of the web. This is particularly true of those evil cookies about which we hear so much. Thus any sort of session information would have to be re-created in the encrypted web content, or worse, in the URL itself.

Next, there is a key discovery problem: if one is encrypting end to end, one needs the correct key for the other end. If one allows for the possibility of retrieving that key from the desired web site through non-oblivious methods, then it is possible to obscure the traffic in the future. But then an interloper may know at least that the site was visited once.

The other challenge is that there is no point in obscuring the information if the proxy itself cannot be trusted, and the proxy doesn’t run for free: someone has to pay its bills. This brings us back to Jardine, and who is paying for all of this.

Does encryption actually improve freedom?

Perhaps the best measure of whether encryption has improved freedoms can be found in the place with the biggest barrier to those freedoms on the Internet: China. China is one of the least free countries in the world, according to Freedom House.

Snapshot from Freedom House shows China toward the bottom in terms of Freedoms
From Freedom House

Another view of the same information comes from Global Partners Digital:

Much of Asia has substantial restrictions on encryption.
Freedom to use encryption: not all countries are assessed.

Paradoxically, one might answer that freedom and encryption seem to go hand in glove, at least to a certain point. However, the causal arrow seems to point from freedom to encryption, and not the other way around. China blocks the use of Tor, as it does many sites through its Great Firewall, and there is no documented example demonstrating that tools such as Tor have had a lasting positive impact.

On the other hand, to demonstrate how complex the situation is, and why Jardine’s (and everyone else’s) work is so speculative, it’s not like dissidents and marginalized people are going to stand up for a survey, and say, “Yes, here I am, and I’m subverting my own government’s policies.”

Oppression as a Service (OaaS)

Cryptophiliacs believe that they can ultimately beat out, or at least stay ahead of, the authorities, whereas China has shown its Great Firewall to be fully capable of adapting to new technologies over time. China and others might also employ another tactic: persisting meta-information for long periods of time, until flaws in privacy-enhancing technology can be found.

This gives rise to a nefarious opportunity: Oppression as a Service. Just as good companies will often test out new technology in their own environments and then sell it to others, so too could a country with a lot of experience at blocking or monitoring traffic. The price it charges might well depend on its aims. If profit is the pure motive, some countries might balk at the price. But if ideology is the aim, common interest could be found.

For China, this could be a mere extension of its Belt and Road Initiative. Cryptography does not stop oppression. But it may – paradoxically – stop some communication, as our current several Internets continue to fragment into the multiple Internets that former Google CEO Eric Schmidt thought he was predicting in 2018 (he was really observing).

Could the individual seeking to have a private conversation with a relative or partner fly under the radar of all of this state mechanism? Perhaps for now. VPN services for visitors to China thrive; but those same services are generally not available to Chinese residents, and the risks of being caught using them may far outweigh the benefits.

Re-establishing Trust: A Government Role?

In the meantime, cyber-losses continue to mount. Like any other technology, the genie is out of the bottle with encryption. But should services that make use of it be encouraged? When does its measurable utility become more fetish than function?

By relying on cryptography we may be letting ourselves and others off the hook for their poor behavior. When a technical approach to enable free speech and privacy exists, who says to a miscreant country, “Don’t abuse your citizens”? At what point do we say that, regardless, and at what point do democracies not only take responsibility for their own governments’ bad behavior, but also press totalitarian regimes to protect their citizens?

The answer may lie in the trust models that underpin cryptography. It is not enough to encrypt traffic. If you do so, but don’t know who you are dealing with on the other end, all you have done is limited your exposure to that other end. But trusting that other end requires common norms to be set and enforced. Will you buy your medicines from just anyone? And if you do and they turn out to be poisons, what is your redress? You have none if you cannot establish rules of the Internet road. In other words, governance.

Maybe It’s On Us

Absent the sort of very intrusive government regulation that China imposes, the one argument that cryptophiliacs have in their pocket that may be difficult for anyone to surmount is the idea that, with the right tools, the individual gets to decide this issue, and not any form of collective. That’s no form of governance. At that point we had better all be cryptophiliacs.

We as individuals have a responsibility to decide the impact of our decisions. If buying a bitcoin is going to encourage more waste and prop up criminals, maybe we had best not. That’s the easy call. The hard call is how we support human rights while at the same time being able to stop attacks on our infrastructure, where people can die as a result, but for different reasons.


Editorial note: I had initially misspelled cryptophilia. Thanks to Elizabeth Zwicky for pointing out this mistake.

Who has access to that smart home you’re buying?

You got the keys to the house, but someone else may have the keys to all of the systems inside the house, including the door locks.

You’ve just moved into a lovely house. It has wonderful smart lights, a wonderful smart oven, fancy smart thermostats, a smart refrigerator, smart locks, and a smart security system. There’s only one problem: not only do you not have all that fancy access in your apps, but perhaps the old owner still does, and he didn’t leave willingly, something you have no way to know. Sounds crazy? We sure have come a long way from just getting the keys and the garage door openers, and one cannot just call a locksmith.

Philips Hue Bridge
Philips Hue Bridge

Many – but not all – IoT-enabled devices have some form of factory reset capability. Often this amounts to inserting a paperclip into a pinhole and holding it for 10 seconds or so, but as we’ll see, the procedure varies by device type, and it is not possible for some devices. Your stove is unlikely to have anything to stick a metal object into, for instance, nor will outdoor lights. In these cases, there will generally be some vendor instructions. In the case of Philips Hue, the only available reset option is to reset the bridge that is used to communicate with the lights. If the bridge is fastened to the wall, as shown in the picture, this means removing it first. This, by the way, requires not only that the bridge be re-paired with the lights and with your app, but that all configuration for the lights be re-established.

Yale Assure Lever Lock
Yale Assure Lever

What about smart locks? Clearly one of the highest priorities upon taking possession of a home is to control who can enter. If you are leasing a home, some smart locks have master codes that the landlord will set and maintain. In this case, all is “good” (assuming you don’t mind your landlord having access) unless the landlord loses the code. If you bought your dwelling, or if the landlord did lose the code, what to do? Again, this will vary by vendor. For example, here are the instructions for the Yale Assure Lever (YRD256):

  1. Remove battery cover and batteries.
  2. Remove the interior escutcheon to access the reset button.
  3. Locate the white reset button near the PCB cable connector.
  4. Press and hold the reset button for a minimum of three (3) seconds while simultaneously replacing the batteries.
  5. Once batteries are replaced, release the reset button.
  6. Reassemble the lock.

You might be wondering what an escutcheon is. According to Google, it’s a flat piece of metal for protection and often ornamentation, around a keyhole, door handle, or light switch.

SKS Double Oven
SKS Double Oven

Next, let’s have a look at the oven. Let’s say that you have a Signature Kitchen Suite Double Wall Oven such as the one pictured to the left. All the instructions say is… follow the app instructions. You had better hope there are some, or a service call to SKS will be in order. By the way, one might reasonably ask what could happen if you don’t reset this device. First, the device itself won’t be able to receive security updates, assuming the company issues any to begin with. This means the oven could be vulnerable to attack. If the oven app was used by the previous owner, then chances are the oven has joined the old Wi-Fi network and will keep looking for it. But we really can’t say, because there’s no clear documentation. This holds true for many “smart” devices.

Genie StealthDrive 750 Plus
Genie StealthDrive 750 Plus

Oh and then there’s that garage door. Here’s the Genie StealthDrive 750 Plus, featuring what they call Aladdin Connect. Their stated “advantage” is that you can “Control and monitor the status of your garage door from anywhere with your smart device.” Or the previous owner can. Or your ex-husband can. The good news is that garage door manufacturers have been in business for a long time, and understand the need to deal with lost or misplaced remotes. The bad news is that they haven’t been in the Internet security business for very long, and there are indeed no instructions on how to reset Aladdin Connect, other than to unplug it.

Oh dear.

How does one take possession of that house?!

While it is impossible to provide a comprehensive guide to all smart devices, here are some guidelines that will help.

First, learn about what IoT devices are in the house prior to entering a contract, or by including full disclosure and assistance as a contingency of sale. Having documentation and a customer support number available will help to assess what effort is required to shift control from the old owner to you. The simplest case may be for the old owner to transfer control to you in whatever application controls the smart appliance. Otherwise, a reset will be required.

You might want to use a simple table along the lines of the following to assist.

System          | IoT Enabled? | Manual located? | Know how to reset? | Customer service contact | Handoff complete?
----------------|--------------|-----------------|--------------------|--------------------------|------------------
Smart Locks     |              |                 |                    |                          |
Door Bell       |              |                 |                    |                          |
Climate Control |              |                 |                    |                          |
Garage Door     |              |                 |                    |                          |
Lighting        |              |                 |                    |                          |
Oven            |              |                 |                    |                          |
Fridge          |              |                 |                    |                          |
Sprinklers      |              |                 |                    |                          |
Smart device handover checklist

It may not be possible to reset certain devices, as we discussed. In this case, what is important is that you read the documentation and understand when you have received the necessary supervisory access. You should be able to understand who has control and who doesn’t. If there are passwords involved, you should change them. If there is a list of authorized users, you should be able to view it and disable the entries you don’t recognize. If you can’t do these things, it may cost money to correct the situation. You should know about that cost in advance.

Is all of this Smart Stuff worth it?

While it may help to think about what benefit you will gain by having smart appliances in the house, increasingly the choice may no longer be yours, as IoT capabilities diffuse through the industry. If you are moving into a place, you don’t want to have to worry about who has control of the door locks. If you are installing door locks, you may want to think twice about the headaches that may occur when you move out. Whatever you do, keep all manuals! They will be needed later.

I should point out that the vendors I named in this post are not bad vendors, but in all likelihood representative of where the market is today. Few vendors are likely to do better than them.

Is there hope for the future?

Yes. Smart home device capabilities are still evolving. Just like we had universal remote controls for televisions in the 1980s, at least some access control functions are likely to be aggregated into one or two control systems. The reason this is likely is that no manufacturer really ever wants to hear from you, because phone calls have to be answered by people whose salary takes away from their profits. This means that incentives are likely aligned for manufacturers to cooperate on standards to facilitate handover.

The Challenges of CISOs

Are CISOs investing enough in protection? Do they have good visibility to threats?

Aub Persian Zam Zam

Long ago there used to be a bar on Haight St. called Aub Persian Zam Zam, run by a cranky guy named Bruno, who hated everyone and preferred only to serve martinis.  If you walked in before 7:00pm, he told you that table service started at 8:00pm.  And if you walked in after 7:00pm, table service stopped at 6:00pm. As a customer, I felt a little like a Chief Information Security Officer (CISO).

CISOs constantly face a challenge with their boards: how much to invest in security. If you haven’t been hacked, then you are accused of spending too much on protection (and might be out of a job); and if you have, then you spent too little (and might be out of a job).  But CISOs have to operate in the here and now. They don’t get to have the luxury of hindsight. What CISOs need is an appropriate level of investment to secure their charges and situational awareness to make good decisions.

Much is being made of the lax security that SolarWinds had. As Bruce Schneier pointed out in the New York Times, they had been hacked not just once but several times. There was the attack on the company, and then there was the attack on their customers. The attack on the customers involved the use of a DNS-based command and control (C&C) network, very stealthily crafted code, and the potential for an infected system to probe whatever was available to it at government and industrial installations across the globe. This may have been particularly damaging in the case of SolarWinds because the legitimate software stood at a privileged point within an enterprise, requiring access to lots of other core infrastructure. The Russians picked a really juicy target. It was, if you will, an incident waiting to happen, and happen it did. SolarWinds was detectable, but detection required an appropriate investment in not only tooling but also back-end expert services to provide situational awareness.
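As one small, hedged illustration of what such tooling looks for: DNS-based C&C traffic often beacons to machine-generated hostnames, which stand out because their labels have unusually high character entropy. The hostnames and the threshold below are made up for illustration; real detection products combine far richer signals than this.

```python
# Flag DNS names whose leftmost label looks machine-generated, a common
# (though by no means sufficient) signal of DNS-based C&C beaconing.
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character over the string's own distribution."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_generated(hostname: str, threshold: float = 3.5) -> bool:
    """Heuristic: long leftmost label with high character entropy."""
    label = hostname.split(".")[0]
    return len(label) >= 12 and shannon_entropy(label) >= threshold

suspect = looks_generated("7hgf0s9a1kmx2qpz4wd8.example.com")  # beacon-like
benign = looks_generated("mail.example.com")                   # ordinary name
```

The point for a CISO is not this particular heuristic but that detecting such traffic requires someone paying for the telemetry and the expertise to tune it.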

Not every target is quite so juicy. Most hackers hit web servers or laptops with various viruses. The soft underbelly of cybersecurity, however, is the control systems, which themselves have access to other infrastructure, as was demonstrated this past month, when a hacker attempted to poison a Florida city’s water supply with lye. Assuming they have one, the Oldsmar CISO might have some explaining to do. How might that person do so, especially when it is the very system meant to protect the others? It starts by knowing how one compares to one’s peers in terms of expenditures. It’s possible both to under- and overspend.

Gordon Loeb Model

Optimal investment models for cybersecurity have been an ongoing area of research. The seminal Gordon-Loeb Model demonstrates a point of optimality and a point of diminishing returns for risk mitigation. The model doesn’t give you the shape of either curve. That was the next area of research.
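To see how the model behaves, here is a numeric sketch using one of the “Class I” breach probability functions from the Gordon-Loeb literature, S(z, v) = v / (αz + 1), where z is the security investment and v the baseline probability of breach. The parameter values are illustrative, not drawn from any real firm’s data; the sketch also checks the model’s well-known result that the optimal investment never exceeds 1/e of the expected loss.

```python
# Numeric sketch of the Gordon-Loeb model with an illustrative
# Class-I breach probability function S(z) = v / (a*z + 1).
import math

v = 0.5          # probability of breach with zero investment (illustrative)
L = 1_000_000    # loss if the breach occurs, in dollars (illustrative)
a = 1e-4         # productivity of security investment (illustrative)

def S(z: float) -> float:
    """Breach probability after investing z."""
    return v / (a * z + 1)

def enbis(z: float) -> float:
    """Expected net benefit of information security investment."""
    return (v - S(z)) * L - z

# Grid-search the optimal investment level.
z_star = max(range(0, 200_001, 10), key=enbis)

# Closed form for this S: maximize enbis => z* = (sqrt(v*L*a) - 1) / a.
z_analytic = (math.sqrt(v * L * a) - 1) / a
```

With these numbers the optimum lands near $60,700: well below the expected loss of $500,000, and below the 1/e bound, which is the model’s way of saying that past a point, extra spending buys little.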

For one, some things are easy to do and some are hard, but the easy things are often not the right things to do. Low-level cybersecurity professionals sometimes make the wrong choices, being risk seeking for big-ticket items like device policy management, two-factor authentication, training, and auditing, while being risk averse in matters that are within their control. Back in 2015, Armin Sarabi, Parinaz Naghizadeh, Yang Liu, and Mingyan Liu set out to study this question. The table below, liberally borrowed from their paper, shows a risk analysis of different sectors.

Sarabi et al, Prioritizing Security Spending: A Quantitative Analysis of Risk Distributions for Different Business Profiles, Workshop on the Economics of Information Security, 2015.

What this says is that, based on reports received, configuration errors were a substantial risk factor pretty much everywhere but accommodation and food services, which instead suffered because employees share credentials. It was a limited survey, and surely the picture has changed since then. In the intervening time, cloud computing has become far more prevalent, and we have seen numerous state actors take on a much bigger, and nastier, role. It is useful, however, for a CISO to have situational awareness of what sorts of common risks are being encountered, and to have some notion of the best practices to counter those risks, so that whatever a firm spends is effective.

Expenditures alone don’t guarantee against break-ins. Knowing one’s suppliers and their practices is also critical. Knowing that Verkada had sloppy practices would have both deterred some from using their cameras, and in turn encouraged that provider to clean up their act. Again, situational awareness matters.


Gordon-Loeb diagram by Luca Rainieri – Own work, CC BY-SA 4.0

It’s Not the Doorbell, It’s the Cloud

Your password in the cloud was weak, not the IoT device this time. But there are emerging IoT standards like DPP that can help do away with passwords.

You would have to have been hiding under a rock over the last week not to have heard the scare stories about kids being tormented by perverts and others being violently extorted through various Ring products. Not exactly what you were expecting from your security product, was it?

With so many reports of IoT devices being vulnerable to attack, one might leap to the idea that the Ring device itself has been poorly designed, and thus broken into, but one would be wrong. That is because, like so many IoT devices, Ring products make use of the cloud to offer a service. Here’s how it all works.

How you access that home IoT device

When you establish an account, you are doing this not on the doorbell, but on a service somewhere on the Internet to which the doorbell connects. This is evident, because when you go to ring.com, you can log in with the account that you have previously established in the app.

Later during device setup, the doorbell is registered with the service, using the phone’s setup app. This is likely the only time the phone would directly communicate with the doorbell. All other communications flow through the service, as drawn above.

So how did someone else get to control your device? If you are not using two factor authentication, an attacker requires two pieces of information to control your device: your email address and your password. Your email address can easily have appeared in public if you have joined a public mailing list, or had made a comment on a poorly designed web site. An attacker may also be able to guess your password if you have used that same password on a service that has been compromised (hint: many have), or the password itself is obvious.

Some recent research has found that long or complex passwords aren’t good because people write them down or forget them. On the other hand, Ring will accept “12345678” as a password, and quite a number of other commonly used passwords that can be found on this list of stupid passwords. First piece of advice in this article: don’t use those passwords!
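A minimal version of that screening is easy to sketch. The handful of entries below is a tiny illustrative excerpt of the kind of blocklist linked above, not the actual list, and the length cutoff is an assumption of mine, not Ring’s policy.

```python
# Minimal password screening: reject short passwords and anything on a
# blocklist of commonly used passwords. Sample list is illustrative only.
COMMON_PASSWORDS = {
    "12345678", "password", "qwerty123", "11111111", "iloveyou",
}

def password_acceptable(candidate: str, min_length: int = 10) -> bool:
    """Return True only if the candidate clears length and blocklist checks."""
    if len(candidate) < min_length:
        return False
    if candidate.lower() in COMMON_PASSWORDS:
        return False
    return True
```

Even a check this crude would have refused “12345678” at signup, which is more than some services do today.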

Ring also offers the option to register a cell phone with your account, so that when you log in, you will receive a code via SMS that you must enter to access your account. This two factor authentication (or 2FA) is stronger, and well worth the mild inconvenience, given that this is your house and its security we are talking about.
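SMS is only one way to deliver such a code. Authenticator apps instead compute a time-based one-time password (TOTP, RFC 6238) locally from a shared secret, so nothing travels over the phone network at login time. Here is a from-scratch sketch of the mechanism; this is the standard algorithm, not Ring’s actual implementation.

```python
# Time-based one-time passwords per RFC 6238, built on HOTP (RFC 4226).
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password: HMAC-SHA1 plus dynamic truncation."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret: bytes, at_time: float, step: int = 30, digits: int = 6) -> str:
    """TOTP is HOTP with the counter derived from the current time."""
    return hotp(secret, int(at_time // step), digits)
```

The server and the app share the secret once, at enrollment; after that, both can derive the same short-lived code independently, which is why this form of 2FA survives SIM-swapping attacks that defeat SMS codes.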

All of this is about securing your online account. The only reason that the EvilBadDoer can bother Little Johnny and take over your doorbell or security camera, at least in this moment, is that EvilBadDoer hacked your password to the online service that controls the device.

Could this marriage of IoT devices and online services be used to provide stronger authentication? Possibly. Because the doorbell communicates with the cloud once it’s set up, and because your phone also communicates with the cloud after the doorbell is set up, it is possible for the phone to provide the doorbell a token. However, for that to work, communications must be secured between the phone and the doorbell during setup. Earlier this year, researchers found that this was not the case: the doorbell was simply using unencrypted HTTP to share information about your Wi-Fi network. Bad Ring! No Ring biscuit!

Luckily, there are some onboarding standards that Ring and others could leverage to help improve matters. One is EasyConnect by the Wi-Fi Alliance, otherwise known as the Device Provisioning Protocol (DPP). Here’s how DPP works:

Wifi Easy Connect

With DPP, you use an app to scan a QR code printed on a label that came with the device; the code contains the public key that was installed during the manufacturing process. The app then looks for the device and authenticates using that key. Look, Ma! No passwords. DPP was primarily intended to be used for Wi-Fi connectivity, but there’s no reason that the same trust couldn’t be leveraged to do away with Ring passwords. This is something that Amazon and others should consider.
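To make concrete what that QR code carries: the general field layout follows the DPP bootstrapping URI format (a “DPP:” prefix, semicolon-separated tagged attributes such as a channel, a MAC address, and a base64-encoded public key, terminated by a double semicolon). The specific MAC and key values below are made-up placeholders, not a real device’s, and a real parser would go on to decode and validate the key.

```python
# Sketch of reading the attributes a DPP bootstrapping QR code encodes.
def parse_dpp_uri(uri: str) -> dict:
    """Split a DPP bootstrapping URI into its tagged attributes."""
    if not uri.startswith("DPP:") or not uri.endswith(";;"):
        raise ValueError("not a DPP bootstrapping URI")
    fields = {}
    # Strip the "DPP:" prefix and the ";;" terminator, then split attributes.
    for attr in uri[len("DPP:"):-2].split(";"):
        if attr:
            tag, _, value = attr.partition(":")
            fields[tag] = value
    return fields

# Placeholder example: C = channel list, M = MAC address, K = public key.
qr = "DPP:C:81/1;M:00112233aabb;K:placeholderBase64PublicKey;;"
info = parse_dpp_uri(qr)
```

Everything needed to authenticate the device rides in that one label, which is exactly why no password ever has to be typed or stored.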

There are some remaining challenges. For instance, what happens if you lose your phone? Can you repeat the exercise, and if you do so, would you have to do so with all the Ring devices in your house? To me this is best handled with some sort of backup before one loses one’s phone.

The key point here is that IoT can actually help itself if we adopt stronger onboarding technologies, like EasyConnect. This will take some time to get right. As a customer, you might want to ask about EasyConnect to help ease password problems so that Little Johnny can sleep easier.