Learning from the Dyn attack: What are the right questions to ask?

The attack on DNS provider Dyn’s infrastructure that took down a number of web sites is now old news.  While not all the facts are public, the press reports that once again, IoT devices played a significant role.  Whether that is true or not, it is a foregone conclusion that until we address the security of these devices, such attacks will recur.  We all get at least two swings at this problem: we can address the attacks from Things as they happen, and we can work to keep Things secure in the first place.

What systems do we need to look at?

  • End nodes (cameras, DVRs, refrigerators, etc.);
  • Home and edge firewall systems;
  • Provider network security systems;
  • Provider peering edge routers; and
  • Infrastructure service providers (like Dyn).

In addition, researchers, educators, consumers and governments all have a role to play.

Roles of IoT

What do the providers of each of those systems need to do? 

What follows is a start at the answer to that question.

Endpoints

It’s easy to pin all the blame on the endpoint developers, but doing so won’t buy so much as a cup of coffee.  Still, Thing developers need to do a few things:

  • Use secure design and implementation practices, such as not hardcoding passwords or leaving extra services enabled;
  • Have a means to securely update their systems when a vulnerability is discovered;
  • Provide network enforcement systems with Manufacturer Usage Descriptions (MUD) so that networks can enforce policies around how a device was designed to operate.

Home and edge firewall systems

There are some attacks that only the network can stop, and there are some attacks that the network can merely impede.  Authenticating and authorizing devices is critical.  Also, edge systems should be quite leery of devices that simply self-assert what sort of protection they require, because a hacked device can make such self-assertions just as easily as a healthy one.  Hacked devices have recently been taking advantage of a mechanism in many home routers, popular with gamers, known as Universal Plug and Play (UPnP), which permits precisely the sort of self-assertion that should be avoided.
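
To make that concern concrete, below is a rough sketch of the kind of unauthenticated request any device on the home network can send to punch a hole in the firewall.  The router’s control URL and the addresses are invented for illustration; a real client would first discover the control URL via SSDP.

# Illustrative sketch only: an unauthenticated UPnP IGD AddPortMapping request.
# The control URL and addresses below are made up; a real client would discover
# the control URL via SSDP (M-SEARCH to 239.255.255.250:1900) first.
import requests

CONTROL_URL = "http://192.168.1.1:5000/ctl/IPConn"   # hypothetical router control URL
SOAP_ACTION = '"urn:schemas-upnp-org:service:WANIPConnection:1#AddPortMapping"'

body = """<?xml version="1.0"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:AddPortMapping xmlns:u="urn:schemas-upnp-org:service:WANIPConnection:1">
      <NewRemoteHost></NewRemoteHost>
      <NewExternalPort>8080</NewExternalPort>
      <NewProtocol>TCP</NewProtocol>
      <NewInternalPort>8080</NewInternalPort>
      <NewInternalClient>192.168.1.50</NewInternalClient>
      <NewEnabled>1</NewEnabled>
      <NewPortMappingDescription>camera</NewPortMappingDescription>
      <NewLeaseDuration>0</NewLeaseDuration>
    </u:AddPortMapping>
  </s:Body>
</s:Envelope>"""

# No authentication, no authorization: the router simply takes the device at its word.
resp = requests.post(CONTROL_URL, data=body,
                     headers={"Content-Type": 'text/xml; charset="utf-8"',
                              "SOAPACTION": SOAP_ACTION})
print(resp.status_code)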

Provider network security systems

Providers need to be aware of what is going on in their networks.  Defense in depth demands that they observe their own networks in search of malicious behavior, and provide appropriate mitigations.  Although there are some good tools out there from companies like Cisco, such as NetFlow and OpenDNS, this is still a pretty tall order.  Just examining traffic can be capital-intensive, and understanding what is actually going on often requires experts, which can get expensive.
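
As a small illustration of the sort of observation involved, here is a sketch that flags hosts with unusually high fan-out from exported flow records.  The CSV layout and threshold are assumptions of mine; production systems consume NetFlow or IPFIX directly and use far better baselines than a fixed number.

# Illustrative sketch: flag sources that talk to an unusually large number of
# distinct destinations, a crude sign of scanning or DDoS participation.
# Assumes flow records exported to CSV with these column names (an assumption).
import csv
from collections import defaultdict

FANOUT_THRESHOLD = 500  # arbitrary threshold, for illustration only

fanout = defaultdict(set)
with open("flows.csv") as f:  # hypothetical export: srcaddr,dstaddr,dstport,bytes
    for row in csv.DictReader(f):
        fanout[row["srcaddr"]].add(row["dstaddr"])

for src, dests in sorted(fanout.items(), key=lambda kv: -len(kv[1])):
    if len(dests) > FANOUT_THRESHOLD:
        print(f"{src} contacted {len(dests)} distinct hosts -- worth a closer look")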

Provider peering edge routers

The routing system of the Internet can be hijacked.  It’s important that service providers take steps to prevent that from happening.  A number of standards have been developed, but service providers have been slow to implement them for one reason or another.  It also helps to understand the source of attacks: implementing source-address (ingress) filtering makes it possible for service providers to establish accountability for the sources of attack traffic.
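
To give a flavor of what the anti-hijacking standards do, here is a toy origin-validation check in the spirit of RPKI route origin authorizations.  The prefixes and AS numbers are invented; real networks get this data from validators feeding their routers.

# Toy sketch of route origin validation: check an announcement against a table
# of authorized (prefix, maxLength, origin AS) entries, in the spirit of RPKI
# ROAs.  All data below is invented for illustration.
import ipaddress

roas = [
    (ipaddress.ip_network("203.0.113.0/24"), 24, 64500),
    (ipaddress.ip_network("198.51.100.0/22"), 24, 64501),
]

def validate(prefix: str, origin_as: int) -> str:
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_net, max_len, roa_as in roas:
        if net.subnet_of(roa_net) and net.prefixlen <= max_len:
            covered = True
            if origin_as == roa_as:
                return "valid"
    return "invalid" if covered else "not-found"

# A hijacker announcing someone else's prefix from the wrong AS shows up as invalid.
print(validate("203.0.113.0/24", 64500))   # valid
print(validate("203.0.113.0/24", 65001))   # invalid
print(validate("192.0.2.0/24", 64500))     # not-found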

Infrastructure providers

Infrastructure upon which other Internet systems rely needs to be robust in the face of attack.  Dyn knows this.  The attack succeeded anyway.  Today, I have little advice other than to understand each attack and do what one can to mitigate it the next time.

Consumers

History has shown that people in their homes cannot be made to do much to protect themselves in a timely manner.  Is it reasonable, for instance, to insist that a consumer spend money to replace an old system that is known to have vulnerabilities?  The answer may be that it depends just how old that system really is.  And this leads to our last category…

Governments

Governments are already involved in cybersecurity.  The question really is how involved they will get with IoT security.  If the people who need to do things aren’t doing them, either we have the wrong incentive model and need to find the right one, or it is likely that governments will get heavily involved.  It’s important that that not happen until the technical community has some understanding of the answers to these questions, and that may take some time.

And so we have our work cut out for us.  It’s brow furrowing time.  As I wrote above, this was just a start, and it’s my start at that.  What other questions need answering, and what are the answers?

Your turn.




iPhone TouchID doesn’t protect you from the government

It’s a common belief that Apple has gone to extraordinary lengths to protect individuals’ privacy through mechanisms such as Touch ID, but what are its limits?  Today Forbes reported that a U.S. attorney was able to get a warrant for the fingerprints of everyone at a particular residence for the express purpose of unlocking iPhones.

Putting aside the shocking breadth of the warrant, suppose you want to resist granting access to an iPhone.  It is not that hard for someone to force your finger onto a phone.  It is quite a different matter for someone to force a password out of your head.  Apple has gone to some lengths to limit certain forms of attack.  For instance, Touch ID generally will not authenticate a severed finger, nor will it authenticate a fingerprint copy.  Also, Apple doesn’t actually store fingerprint images, but rather a mathematical representation, in effect a hash, of the fingerprint data.  Note that if the hashing method is known, then the hash itself is sensitive.

For those who care, the question is what lengths someone is likely to go to in order to gain access to a phone.  Were someone holding a gun to my head and demanding access to my phone, unless it meant harming my family, I’d probably give them the information they wanted.  Short of that, however, I might resist, at least long enough to have my day in court.  If that would be your approach, then you might want to skip Touch ID, lest someone simply get rough with you to get your fingerprint.  The problem is that Touch ID cannot currently be required in combination with a passcode on iPhones and iPads.  Either suffices.  And this goes against a basic concept of two-factor authentication: combine something you are, like a fingerprint, with something you know, like a passcode.

Home wireless security challenges for Things

It’s hard – but not impossible – for Things to connect to a home network in some sort of automated fashion.

What’s the right way to connect a Thing to your home network?  Way back in the good old days, say last year, you could connect a device to your home network easily enough because the system had a display and a touch screen or a keyboard.  With many Things there is no display and no keyboard, and some of the devices we are connecting may themselves not be that accessible to the home owner.  Think attic fans or even some light bulbs.  A means is needed first to tell these devices which network is the correct one to join, and then what the credentials for that network are.  In order to do any of this, there needs to be a way for the home router to communicate with the device in a secure and confidential way.  That means that each end requires some secret.  Public key cryptography is perfect for this, and it is how things would work in the enterprise.

WPA2 Enterprise makes use of individual keys and a flexible means to authenticate individuals and devices.  It looks a little like this:

EAP over RADIUS

EAP stands for Extensible Authentication Protocol, and it is just that.  There are many different authentication mechanisms available with EAP.  One method, called EAP-TLS, calls for each side of the communication to transmit a certificate in an authentication transaction, establishing its identity as certified by someone.  Initially, a device may be certified by its manufacturer, but later it would use a certificate issued by the local network.
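
EAP-TLS itself runs inside EAP, carried over RADIUS, so it is not something to bang out in a few lines; but its heart, mutual certificate authentication, can be sketched with ordinary TLS.  The host name and file names below are placeholders, not real infrastructure.

# Not EAP-TLS itself, but the same core idea: each end presents a certificate
# and verifies the other's against a trust anchor.  File and host names are
# placeholders for illustration.
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.load_verify_locations("network-ca.pem")                # trust anchor for the network's certificate
ctx.load_cert_chain("device-cert.pem", "device-key.pem")   # the device's own certified identity

with socket.create_connection(("authenticator.example.net", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="authenticator.example.net") as tls:
        # If we get here, both sides have proven possession of certified keys.
        print("peer identity:", tls.getpeercert()["subject"])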

A QR code

One challenge is getting the device certificate to be known by the network.  One simple method is to have an application tied to a camera scan a QR code that points to a URL containing a signed copy of the device’s identity or certificate.  For instance, the QR code to the right encodes this URL:

https://www.ofcourseimright.com/qr/2834298343404739274639374630463934

which in turn gets you a certificate.  The next challenge is whether the device should trust the network.  In the enterprise, there is a new approach being developed known as Bootstrapping Remote Secure Key Infrastructures (BRSKI) (sometimes pronounced “brewski”).  In this case the manufacturer tells the device that the network is the correct one to join, essentially by providing the device with the network’s operational trust anchor.  This allows the device to validate the network’s certificate.
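
Returning to the QR-code step, here is a toy sketch of both halves, using the URL above: the manufacturer encodes the pointer into a QR image, and the scanning application fetches what it points to.  It assumes the third-party qrcode (with Pillow) and requests packages, and it omits verifying the signature on what comes back, which a real deployment must do.

# Toy illustration of the QR-code approach described above.  Signature
# verification of the fetched certificate is deliberately omitted here.
import qrcode     # third-party package (with Pillow) for generating QR images
import requests

url = "https://www.ofcourseimright.com/qr/2834298343404739274639374630463934"

# Manufacturer side: print this image on the device or its packaging.
qrcode.make(url).save("device-qr.png")

# Installer side: after scanning the QR code, fetch the device's certificate.
cert_pem = requests.get(url, timeout=10).text
print(cert_pem[:64], "...")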

That’s something of a tall order even in the enterprise, but one that is worth aiming for.  If the home can leverage a service offered either by a service provider or by a newfangled home router company, if THEY can authenticate the home, and the manufacturer can authenticate them, then we have ourselves a ball game.  More work is needed to get all the elements in place.

Let’s not blame Yahoo! for a difficult policy problem

Many in the tech community are upset over reports from The New York Times and others that Yahoo! responded to an order issued by the Foreign Intelligence Surveillance Court (FISC) to search across their entire account base for specific “signatures” of people believed to be terrorists.

It is not clear what capabilities Yahoo! already has, but it would not be unreasonable to expect them to have the ability to scan incoming messages for spam and malware, for instance.  What’s more, we are all the better for this sort of capability.  Consider that around 85% of all email is spam, a small amount of which contains malware, and Yahoo! users don’t see most of that.  Much of it can be rejected without Yahoo! having to look at the content at all, just by examining the source IP address of the device attempting to send Yahoo! mail, but in all likelihood they do look at some content, as many systems do.  In fact one of the most popular open source systems in the early days, SpamAssassin, did just this.  The challenge from a technical perspective is to implement such a mechanism without the mechanism itself presenting a large attack surface.
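
To illustrate how much can be rejected from the address alone, here is a minimal sketch that looks a connecting IP address up in a DNS blocklist before any content is examined.  The Spamhaus zone is named only as the customary example; real mail servers have this sort of check built in.

# Minimal sketch of rejecting mail on the source IP alone: look the connecting
# address up in a DNS blocklist before ever reading the message.
import socket

def listed_in_dnsbl(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    reversed_ip = ".".join(reversed(ip.split(".")))
    try:
        socket.gethostbyname(f"{reversed_ip}.{zone}")
        return True          # an A record means the address is listed
    except socket.gaierror:
        return False         # NXDOMAIN means it is not

print(listed_in_dnsbl("192.0.2.1"))  # documentation address, almost certainly not listed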

If the government asking for certain messages sounds creepy, we have to ask what a signature is.  A signature normally refers to characteristics of a communication that either identify its source or indicate that it has some quality.  For instance, viruses all have signatures.  In this case, what is claimed is that terrorists communicated in a certain way such that they could be identified.  According to The Times, the government demonstrated probable cause that this was true, and that the signature was “highly unique”*.  That is, the signature likely matches very few actual messages that the government would see, although we don’t know how small that number really is.  Yahoo! has denied having a capability to scan across all messages in their system, but beyond that not enough is public to know what they would have done.  It may well not have been reasonable to search specific accounts, because one can easily create an account, and the terrorists may have many.  The government publicly revealing either the probable cause or the signature would be tantamount to alerting terrorists that they are in fact under investigation, and that they can be tracked.

The risk to civil liberties is that there are no terrorists at all, and this is just a fishing expedition, or worse, persecution of some form.  The FISC and its appellate courts are intended to provide some level of protection against abuse, but in all other cases, the public has a view into whether such abuse is actually occurring.  Many have complained about a lack of transparent oversight of the FISC, but the question is how to have that oversight without alerting The Bad Guys.

The situation gets more complex if one considers that other countries would want the same right to demand information from their mail service providers that the U.S. enjoys, as Yahoo’s own transparency report demonstrates.

In short, we are left with a set of difficult compromises that pit the gathering of intelligence on terrorists and other criminals against the risk of government abuse.  That’s not Yahoo!’s fault.  This is a hard problem that requires thoughtful consideration of these trade-offs, and the timing is right to think about this.  Once again, the Foreign Intelligence Surveillance Act (FISA) will be up for reauthorization in Congress next year.  And in this case, let’s at least consider the possibility that the government is trying to fulfill its responsibility of protecting its citizens and residents, and Yahoo! is trying to be a good citizen in looking at each individual request on its merits and in accordance with relevant laws.


* No I don’t know the difference between “unique” and “highly unique” either.

How hard is it to secure a baby monitor?

Parents often seek the security of a baby monitor to know that their child is resting comfortably.  Unfortunately that security is often misplaced.  Last year Rapid7 produced a damning report, exposing numerous vulnerabilities in these devices.  As an example, the Philips In.Sight B120/37 made use of a fixed password over an insecure telnet or web service residing on TCP port 8080.

The thing is, the In.Sight came very close to getting it right, or as the great Maxwell Smart would say, “Missed it by that much!”  That’s because Philips also offers a cloud-based service that would not otherwise require the device to listen on any TCP port.  That’s a good way to go, because it makes it harder to probe the device for vulnerabilities.

One good reason to offer a local service is that some people do not trust cloud services, and they particularly do not trust cloud services involving images of their children.  Indeed this makes for a very difficult choice, because that same Rapid7 report notes problems with some cloud-based services, and so parents wouldn’t be wrong to worry.

Either way, I’ve built a MUD file using MudFileMaker.

A brief look at the application alongside tcpdump, together with a quick look at the server binary, seems to indicate that cloud communications go to api.ivideon.com.  We can thus come up with an appropriate MUD file as follows:

{
  "ietf-mud:meta-info": {
    "lastUpdate": "2016-10-03T12:56:08+02:00",
    "systeminfo": "Philips In.Sight B120/37 Baby Monitor",
    "cacheValidity": 1440
  },
  "ietf-acl:access-lists": {
    "ietf-acl:access-list": [
      {
        "acl-name": "mud-94344-v4in",
        "acl-type": "ipv4-acl",
        "ietf-mud:packet-direction": "to-device",
        "access-list-entries": {
          "ace": [
            {
              "rule-name": "clout0-in",
              "matches": {
                "ietf-acldns:src-dnsname": "api.ivideon.com",
                "protocol": 6,
                "source-port-range": {
                  "lower-port": 443,
                  "upper-port": 443
                }
              },
              "actions": {
                "permit": [
                  null
                ]
              }
            },
            {
              "rule-name": "entin0-in",
              "matches": {
                "ietf-mud:controller": "http://ivideon.com/babymonitors",
                "protocol": 6,
                "source-port-range": {
                  "lower-port": 8080,
                  "upper-port": 8080
                }
              },
              "actions": {
                "permit": [
                  null
                ]
              }
            }
          ]
        }
      },
      {
        "acl-name": "mud-94344-v4out",
        "acl-type": "ipv4-acl",
        "ietf-mud:packet-direction": "from-device",
        "access-list-entries": {
          "ace": [
            {
              "rule-name": "clout0-in",
              "matches": {
                "ietf-acldns:src-dnsname": "api.ivideon.com",
                "protocol": 6,
                "source-port-range": {
                  "lower-port": 443,
                  "upper-port": 443
                }
              },
              "actions": {
                "permit": [
                  null
                ]
              }
            },
            {
              "rule-name": "entin0-in",
              "matches": {
                "ietf-mud:controller": "http://ivideon.com/babymonitors",
                "protocol": 6,
                "source-port-range": {
                  "lower-port": 8080,
                  "upper-port": 8080
                }
              },
              "actions": {
                "permit": [
                  null
                ]
              }
            }
          ]
        }
      }
    ]
  }
}

Remember, the router needs to determine which devices are authorized to be in the class http://ivideon.com/babymonitors.  Note the use of incoming TCP port 8080.  It is at least possible for the server software to run on another port if the configuration is changed.  In that case, the above MUD file would be too restrictive, and the device would not function.  To fix that, one would simply remove the TCP port match.

Again, note that only authorized communications are listed in the file, and so just because the developer left a telnet server in place doesn’t mean that just anyone would be able to access it.  This serves as a means to confirm the intentions of the developers.  Of course developers should never leave back doors, but if they do, perhaps MUD can reduce their impact, and let parents rest just a little easier.