Will New NY Banking Regulations Actually Tighten Cybersecurity?

Proposed New York banking regulations might not help that much.

New York is proposing new cybersecurity rules that would raise the bar for banks over which it has jurisdiction (wouldn’t that be just about all of them?).  On their face, the new regulations would seem to improve banks’ overall security posture, but digging a bit deeper leads me to conclude that they need a bit more work.

A few key aspects of the new rules are as follows:

  1. Banks must perform annual risk assessments and penetration tests;
  2. New York’s Department of Financial Services (DFS) must be notified within 72 hours of an incident (there are currently numerous timeframes);
  3. Banks must use 2-factor authentication for employee access; and
  4. All non-public data must be encrypted, both in flight and at rest.

The first item on that list is what Chief Information Security Officers (CISOs) already get paid to do.  Risk assessment in particular is the most important task on the list, because as banks evolve their service offerings, they must assess both evolving threats and potential losses.  For example, as banks added iPhone apps, the risk of an iPhone being stolen became relevant, and that in turn affected app design.

Breach notification laws already exist in just about every jurisdiction.  The proposed banking regulation does not say what the regulator will do with the information it receives or how that information will be safeguarded.  A premature release can harm ongoing investigations.

Most modern banks outside the United States already use two-factor authentication for employee access, and many require two-factor authentication for customer access.

That last item is a big deal.  Encrypting data in flight (e.g., transmissions from one computer to another) protects against eavesdroppers.  At the same time, absent other controls, encryption can obscure data exfiltration (information theft).  Banks currently have many tools that rely on certain transmissions being “in the clear”, and it may take some redesign of communication paths to satisfy both the encryption-in-flight requirement and auditing needs.  Some information is simply impractical to encrypt in flight today.  This includes discovery protocols such as DHCP, name service exchanges (DNS), and certain other network functions.  To encrypt much of this information would require protection at a still lower layer, such as IEEE 802.1AE (MACsec) hop-by-hop encryption.  The regulation is, again, vague on precisely what is necessary.  One thing is clear, however: its definition of non-public information is quite broad.
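
To make the in-flight piece concrete, here is a minimal sketch, using Python’s standard ssl module, of wrapping an ordinary TCP connection in TLS.  The host, port, and request are placeholders for the example; a bank would more likely rely on its middleware or IPsec between its own systems.

```python
# A minimal sketch of "encryption in flight": wrapping an ordinary TCP
# connection in TLS with Python's standard ssl module.  The host, port, and
# request are placeholders for the example.
import socket
import ssl

context = ssl.create_default_context()            # verifies the server certificate
with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        tls_sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(tls_sock.recv(200))                 # an eavesdropper sees only ciphertext
```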

To meet the “data at rest” requirement, banks will have to employ either low-level disk encryption or higher-level object encryption.  Low-level encryption protects against someone stealing a disk or pulling it from the trash and reading it, but it provides very little protection against someone breaking into a computer while the disk is still spinning.  Moreover, banks generally have rules about crushing disks before they can leave a data center.  Requiring data at rest to be encrypted in data centers may therefore not provide much risk mitigation.  While missing laptops have repeatedly been a source of data breaches, how often has a missing data-center disk caused one?

Object-level encryption, or the encryption of groups of information elements (think email messages), can provide strong protection should devices be broken into.  It is particularly interesting because, done right, it can address both data in flight and data at rest.  The challenge is that the tools for it are quite limited.  While there are some tools, such as email message encryption, and while there are various ways one can use existing general-purpose mechanisms such as OpenSSL to encrypt objects at rest, object-level encryption remains a challenge because it must be implemented at the application level across all applications.  Banks may have tens of thousands of applications running at any one time.
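
To make the idea concrete, here is a minimal sketch of object-level encryption using the Fernet recipe from the Python cryptography package.  The key handling is illustrative only; in practice everything hinges on proper key management.

```python
# Minimal sketch of object-level encryption using the Python "cryptography"
# package (Fernet: AES-128-CBC plus HMAC).  Key handling here is illustrative
# only; a real bank would manage keys in an HSM or key-management service.
from cryptography.fernet import Fernet

def encrypt_record(key: bytes, record: bytes) -> bytes:
    """Encrypt a single object (e.g., one customer record or email message)."""
    return Fernet(key).encrypt(record)

def decrypt_record(key: bytes, token: bytes) -> bytes:
    """Decrypt an object previously produced by encrypt_record()."""
    return Fernet(key).decrypt(token)

if __name__ == "__main__":
    key = Fernet.generate_key()          # per-application or per-object key
    ciphertext = encrypt_record(key, b"account=12345; balance=100.00")
    # The same ciphertext can be stored (data at rest) or transmitted
    # (data in flight) without further transformation.
    assert decrypt_record(key, ciphertext) == b"account=12345; balance=100.00"
```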

This is an instance where the financial industry could be a technology leader.  However, all such development must be grounded in a proper risk assessment.  Otherwise we end up in a situation where banks will have expended enormous amounts of resources without having substantially improved security.

Looming wireless problems with IoT security

Security experts have two common laments:

  • Security is an afterthought, and
  • Security is hard to get right.

Nowhere has this been more true than in wireless security, where it took the better part of two decades to get us to where we are today.  “Wireless” can mean many different things: 3G cellular service, Wi-Fi, Bluetooth, or something else.  In the context of Wi-Fi, we have standards such as WPA Personal and WPA Enterprise that were developed at the IEEE.  Similarly, 3GPP has developed secure access standards for your phone through the use of a SIM card.  With either WPA Enterprise or 3G, you can bet that if your device starts to misbehave, it can be uniquely identified.

Unfortunately that’s not so much the case with other wireless standards, and in particular IEEE 802.15.4, where security has for the time being been largely left to higher layers.  And that’s just fine if what we’re talking about is your Bluetooth keyboard.  But it’s not fine at all if we’re talking about a large number of devices, one of which is misbehaving.

[Figure: a wireless mesh lighting network]

Here we have a lighting network.  It might consist of many different light bulbs, maybe hundreds.  Now imagine a bad guy breaking into one of those devices and attacking the others.  Spot the bad guy.  In a wired world, assuming you have access to the switch, you can spot the device simply by looking at which port a connection came in on.  But this is wireless, and mesh wireless at that.  If each device has its own unique key, you can trace traffic per device and per session.  But if all devices share a key, you need to find other means.  A well-hacked device isn’t going to give you many clues; it’s going to try to mimic a device that isn’t hacked, perhaps one that isn’t turned on or one that doesn’t even exist.
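
To illustrate why unique keys help, here is a small sketch of deriving a per-device key from a network master key, so that traffic which authenticates under a given key can be attributed to a specific bulb.  The derivation scheme is invented for the example and is not taken from any 802.15.4 profile.

```python
# Illustrative sketch (not any real 802.15.4 profile): deriving a unique key
# per device from a network master key, so that traffic can be attributed to
# the device that protected it rather than to an anonymous shared key.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def per_device_key(master_key: bytes, device_id: bytes) -> bytes:
    """Derive a 128-bit key bound to one device's identifier."""
    return HKDF(
        algorithm=hashes.SHA256(),
        length=16,
        salt=None,
        info=b"mesh-device-key:" + device_id,   # binds the key to the identity
    ).derive(master_key)

# With a shared key, every bulb's traffic looks the same; with per-device
# keys, the key that successfully authenticates a frame identifies its source.
keys = {bulb: per_device_key(b"\x00" * 32, bulb) for bulb in (b"bulb-01", b"bulb-02")}
```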

These attacks can vary in nature.  If the mesh is connected to other networks, such as enterprise networks, then attacks can be aimed at resources on those networks.  They might range from a form of so-called “snowshoe” attack, where no one device generates a lot of traffic but the aggregate of hacked devices overwhelms a target, to something more destructive, like attempts to reconfigure critical infrastructure.

Some attacks aren’t even intended as such, as Raul Rojas discovered in 2009, when a single light bulb took down his IoT-enabled house.

What to do?

The most obvious thing to do is not to get into this situation in the first place.  From a traceability standpoint, network managers need to be able to identify the source of attacks.  Having unique wireless sessions between leaf and non-leaf nodes, bound to source addresses, is ideal.  Alternatively, all communications in a mesh could tunnel to non-leaf nodes that have strong diagnostic capabilities, such as IPFIX and port spanning.  At that point administrators can at least log traffic to determine the source of attacks.  That’s a tall order for a light bulb, but it’s why companies like Cisco exist: to protect your infrastructure.

If none of these alternatives exists, poor network administrators (who might just be homeowners like Mr. Rojas) are forced into a position where they might need to consider the entire mesh a single misbehaving device and disconnect it from the network.  And even that might not do the job: a smart piece of malware might notice and quiet itself until it can determine that the mesh has been reconnected.

Some careful thought is required as these capabilities develop.

Comey and Adult Conversations About Encryption

What does an adult conversation over encryption look like? To start we need to understand what Mr. Comey is seeking. Then we can talk about the risks.

The AP and others are reporting that FBI director James Comey has asked for “an adult conversation about encryption.”  As I’ve previously opined, we need just such a dialog between policy makers, the technical community, and the law enforcement community, so that the technical community has a clear understanding of what investigators really want, and policy makers and law enforcement have a clear understanding of the limits of technology.  At the moment, however, it cannot simply be about give and take.  Just as no one can legislate that π = 3, no one can legislate that lawful intercept be done in a perfectly secure way.  Mr. Comey’s comments do not quite seem to grasp that notion.  At the same time, some in the technical community do not want to give policy makers the chance to even evaluate the risks for themselves.  We have recently seen stories of the government stockpiling malware kits.  This should not be too surprising, given that at the moment there are few alternatives for accomplishing their goals (whatever those are).

So where to start?  It would be helpful to have from Mr. Comey and friends a concise statement as to what access they believe they need, and what problem they think they are solving with that access.  Throughout all of this, such a statement has been conspicuous in its absence.  In its place we have seen sweeping assertions about grand bargains involving the Fourth Amendment.  We need to be specific about what the actual demand from the lawful-intercept community is before we can have those sorts of debates.  Does Mr. Comey want to be able to crack traffic on the wire?  Does he want access to end-user devices?  Does he want access to data that has been encrypted in the cloud?  It would be helpful for him to clarify.

Once we have such a statement, the technical community can provide a view of the risks of the various mechanisms that might accomplish those policy goals.  We’ve assuredly been around the block on this a few times.  The law enforcement community will never obtain a perfect solution, and they may not need perfection.  So what’s good enough for them, and what is safe enough for the Internet?  How can we implement such a mechanism in a global context?  And how would the mechanism be abused by adversaries?

The devil is assuredly in the details.

Here’s MUD in your eye! A way to protect Things on the Internet

How can the network protect so many types of things?  We need manufacturers to step up and tell us.

[Image: U.S. Army Pvt. Charles Shidler crawls through mud]

Since 2011, Cisco Systems has been forecasting that there will be at least 50 billion devices connected to the Internet by the year 2020.  Those are a lot of Things, but that’s not the number I’m worried about.  Consider this: Apple manages somewhere in the neighborhood of 1 billion active iOS devices on its own, and there are about 1.4 billion Android devices that are also managed, though less well.  Rather, it’s the number of types of things that people should be concerned about.  To begin with, not everyone is going to do as good a job of managing their products out in the field as Apple and Google do.  Moreover, even Apple and Google end support for older versions of their products after some period of time.

I call this the Internet of Threats.  Each and every one of those devices, including the device you are reading this note on right now, probably has a vulnerability that some hacker will exploit.

A good number of the manufacturers of those things will never provide fixes to their customers, and even those that do have very little expectation that the device will ever be updated.  Let’s put it this way: when was the last time you installed new software on your printer?  Probably never.

The convenient thing is that many Things probably have only a small set of uses.  A printer prints and maybe scans, a thermostat like a Nest controls the temperature in your house, and a baby monitor monitors babies.  This is the exact opposite of the general-purpose computing model your laptop has, and we can take advantage of that fact.

If a Thing only has a small number of uses, then it probably only communicates on the network in a small number of ways.  The people who know about those small number of ways are most likely the manufacturers of the devices themselves.  If this is the case, then what we need is a way for manufacturers to tell firewalls and other systems what those ways are, and what ways are particularly unsafe for a device.  This isn’t much different from a usage label that you get with medicine.

So what is needed to make all of this work?  Again, conveniently, most of the components are already in your network.  The first thing we need is a way for devices to tell the network where to get the manufacturer usage description file (or MUD file).  There’s an excellent example of that in your browser right now, called a Uniform Resource Locator (URL), like https://www.ofcourseimright.com.  In our case, we need something a bit more structured, like https://www.example.com/.well-known/mud/v1/someproduct/version.  How you get that file, however, is exactly the same as how you got to this web page.

Next, we need a way for the Thing to give that URL to the network.  Once again, the technology is pretty much done.  Your device got an IP address today using the Dynamic Host Configuration Protocol (DHCP), which provides an introduction between the device and the network.  All we need to do is add one new parameter or option so that the client can simply pass along the MUD URL.  There are even more secure ways of doing this, using public key infrastructure (PKI) approaches such as IEEE’s 802.1AR certificate format and the 802.1X protocol.  The nice thing about using a manufacturer certificate in 802.1AR is that it is then the manufacturer, and not the device itself, that is asserting what the device’s communication patterns are.
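
As a rough illustration of how small that addition is, here is a sketch of encoding a MUD URL as a DHCPv4 option.  The option code used below is a placeholder chosen for the example, not a statement of what IANA assigns.

```python
# A sketch of encoding a MUD URL as a DHCPv4 option (type-length-value).
# OPTION_MUD_URL below is a placeholder code point chosen for illustration;
# the real value is whatever IANA assigns for the MUD option.
MUD_URL = b"https://www.example.com/.well-known/mud/v1/someproduct/version"
OPTION_MUD_URL = 161  # illustrative value only

def mud_dhcp_option(url: bytes, code: int = OPTION_MUD_URL) -> bytes:
    """Return the option as bytes: one byte of code, one of length, then the URL."""
    if len(url) > 255:
        raise ValueError("a DHCPv4 option payload must fit in one byte of length")
    return bytes([code, len(url)]) + url

print(mud_dhcp_option(MUD_URL).hex())
```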

Now, thanks to DHCP or IEEE 802.1X, the network can go get the MUD file.  What does that look like?  At the moment, <it> <looks> <like> <a> <bunch> <of> <XML>.  {“it”: [“may”, “look”, “more”], “like”: “JSON”} in the future.  The good news here is that once again, we’re building on a bunch of work that is already complete.  The XML itself is structured using a data model called YANG.  So long as it conveys to the network what sort of protections a device needs, it could be anything, but YANG will do for now.
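
To give a flavor of the content rather than the exact format, here is a purely illustrative sketch of the kind of policy such a file might convey, serialized as JSON.  The field names are invented for this example and do not match the actual YANG model.

```python
# Purely illustrative: the kind of policy a MUD file might convey, shown as
# JSON.  The field names are invented for this example; the real file is
# defined by a YANG data model and will not look exactly like this.
import json

mud_like_policy = {
    "manufacturer": "example.com",
    "model": "someproduct",
    "allowed-communications": [
        {"direction": "to-device", "protocol": "tcp", "port": 631,
         "comment": "accept print jobs from the local network"},
        {"direction": "from-device", "protocol": "tcp", "port": 443,
         "host": "updates.example.com", "comment": "check for firmware updates"},
    ],
    "default": "deny",
}

print(json.dumps(mud_like_policy, indent=2))
```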

Finally, the basic enforcement building block is the access control function in a router or access point.  That function says what each device may communicate with, and such functions have been around since the earliest days of the Internet.

And that’s it.  So now if I have a printer from HP and they make a MUD file available, they might tell my network that the printer only wants to receive printer communications, and that it should only ever try to send certain types of unsolicited messages.  If anyone tries to contact the printer for another use, forget it.  If the printer tries to contact CNN, or more importantly random devices on my network, it has probably been hacked, and it will be blocked.  Google can do the same with a Nest.
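
To show how the pieces fit together, here is a hypothetical sketch of turning such a policy into generic access-control entries for that printer.  The rule syntax is illustrative, not any particular vendor’s ACL language.

```python
# Hypothetical sketch: translating a MUD-like policy (in the invented format
# above) into generic access-control entries for the printer.  The rule syntax
# is illustrative, not any particular vendor's ACL language.
printer_policy = {
    "allowed-communications": [
        {"direction": "to-device", "protocol": "tcp", "port": 631, "host": "any"},
        {"direction": "from-device", "protocol": "tcp", "port": 443,
         "host": "updates.example.com"},
    ],
}

def policy_to_acl(device_ip, policy):
    rules = []
    for entry in policy["allowed-communications"]:
        if entry["direction"] == "to-device":
            rules.append(f"permit {entry['protocol']} {entry['host']} -> {device_ip}:{entry['port']}")
        else:
            rules.append(f"permit {entry['protocol']} {device_ip} -> {entry['host']}:{entry['port']}")
    rules.append(f"deny any <-> {device_ip}")  # the printer's attempt to reach CNN dies here
    return rules

for rule in policy_to_acl("192.0.2.10", printer_policy):
    print(rule)
```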

We’re talking about this at the IETF and elsewhere.  What do you think?

The Internet of Everything: Everything will communicate with something!

Things will communicate with their manufacturers, and they need to do so to be secure.

A number of security researchers are getting upset at seeing home devices communicate with one another or with random sites in China.  Is this an attack?  Probably not.  But there may be vulnerabilities that can be exploited, and that should give consumers pause.

There are two common design patterns.  Today I’m just going to discuss what we call “Calling Home”.  When we use the term, we are not referring to your home, but to a centralized management site.  In the case of Thing manufacturers, the site is likely offered by the manufacturer.

So you just bought that new digital video recorder and it offers a great new feature: you can program it wherever you are.  There are many such devices on the market today, such as a SlingBox.  How do those communications happen?

[Figure: home devices behind a home router, with a cloud rendezvous service connecting them to a smartphone]

In the figure above, all your home devices sit behind your home router.  They’re generally allowed to connect to systems outside of your network, but systems outside are not able to connect in.  In part this is a security feature: your firewall will block incoming connections so that the entire world can’t attack you.  In part, however, it’s because the systems in your home are only using locally recognizable IP addresses.  And since your iPhone moves around, your home doesn’t know how to get to it.  Therefore, a rendezvous service is needed.  That’s what that cloud function is performing, and that is what those curved lines indicate.

The SlingBox on the left may not just be connecting for the sake of communicating with your smart phone.  It is probably also doing so for other reasons, such as receiving electronic program guide information.

In the world of IoT, that is a common design pattern.  Devices will need to communicate with their manufacturers’ web sites for all sorts of reasons, but there is one common and important reason: devices will have bugs.  As manufacturers develop fixes, devices will need to learn of those fixes and install them.  Every modern operating system and browser has this feature, and all Things will need it as well.  In fact, one big concern today is what happens when manufacturers do not offer fixes; those vulnerabilities are then out there for anyone to exploit.  This is a big problem in the developing world, where consumers often buy devices on the secondary market, long after manufacturers intended them to be retired.
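
As a rough sketch of what such a “calling home” update check might look like, consider the following.  The URL and JSON fields are hypothetical, and a real device would also verify a signature over the firmware image itself.

```python
# A rough sketch of the "calling home" update check.  The URL and JSON fields
# are hypothetical; a real device would also verify a signature over the
# firmware image using a manufacturer key baked into the device.
import json
import urllib.request

CURRENT_VERSION = "1.0.3"
UPDATE_URL = "https://updates.example.com/someproduct/latest.json"  # hypothetical

def check_for_update():
    # HTTPS means the manufacturer's certificate is checked by default.
    with urllib.request.urlopen(UPDATE_URL, timeout=10) as response:
        info = json.load(response)
    if info.get("version") != CURRENT_VERSION:
        # A real device would now fetch info["image"], verify its signature,
        # and install it during a maintenance window.
        return info
    return None
```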

Could a device transmit private information to a manufacturer?  Sure.  In fact, Samsung got caught last year with a dreadful privacy policy under which its televisions could have been listening to and reporting conversations.

Here’s the rub: without extensive analysis, it’s hard to know exactly what is being exchanged between a device and a manufacturer.  Encryption will keep observers from seeing what is being exchanged.  At the same time, a lack of encryption would be at least as risky to consumer privacy.

When devices are able to communicate at all, it is possible that they will be compromised.  It’s important to understand that there are risks with each Internet-enabled device, but it’s also important to consider any benefit the communication brings.  A refrigerator or a heater that knows it is in need of repair can have the manufacturer contact the owner, for instance.  That’s worth something to some people.  Judge the risks for yourself.

What should the best practices be in this space and what should consumers expect in products?  More on that over time, but feel free to answer those questions yourself for now.


iPhone image courtesy World Super Cars on Wikipedia.