The Challenges of CISOs

Are CISOs investing enough in protection? Do they have good visibility into threats?

Aub Persian Zam Zam

Long ago there used to be a bar on Haight St. called Aub Persian Zam Zam, run by a cranky guy named Bruno, who hated everyone and preferred to serve only martinis.  If you walked in before 7:00pm, he told you that table service started at 8:00pm.  And if you walked in after 7:00pm, table service stopped at 6:00pm.  As a customer, I felt a little like a Chief Information Security Officer (CISO).

CISOs constantly face a challenge with their boards: how much to invest in security. If you haven’t been hacked, you are accused of spending too much on protection (and might be out of a job); if you have, you spent too little (and might be out of a job).  But CISOs have to operate in the here and now. They don’t have the luxury of hindsight. What CISOs need is an appropriate level of investment to secure their charges and the situational awareness to make good decisions.

Much is being made of the lax security at SolarWinds. As Bruce Schneier pointed out in the New York Times, the company had been hacked not just once, but several times. There was the attack on the company itself, and then there was the attack on its customers. The attack on the customers involved a DNS-based command and control (C&C) network, very stealthily crafted code, and the potential for an infected system to probe whatever was available to it at government and industrial installations across the globe. This may have been particularly damaging in the case of SolarWinds because the legitimate software could sit at a privileged point within an enterprise, requiring access to lots of other core infrastructure. The Russians picked a really juicy target. SolarWinds was, if you will, an incident waiting to happen, and happen it did. The attack was detectable, but detection required an appropriate investment not only in tooling but also in the back-end expert services that provide situational awareness.

Not every target is quite so juicy. Most hackers hit web servers or laptops with various viruses. The soft underbelly of cybersecurity, however, is the control systems, which themselves have access to other infrastructure, as was demonstrated this past month when a hacker attempted to poison a Florida city's water supply with lye. Assuming Oldsmar has a CISO, that person might have some explaining to do. How might that person explain it, especially when the compromised system is the very one meant to protect the others? It starts by knowing how one compares to one's peers in terms of expenditures. It's possible both to under- and to overspend.

The Gordon-Loeb Model

Optimal investment models for cybersecurity have been an ongoing area of research. The seminal Gordon-Loeb model demonstrates a point of optimality and a point of diminishing returns for risk mitigation. The model doesn’t give you the shape of either curve, though. That was the next area of research.
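
To make the shape question concrete, here is a small numerical sketch of the model as I understand it: a firm facing a potential loss L with vulnerability v chooses an investment z to reduce the breach probability S(z, v), and maximizes the expected net benefit (v - S(z, v)) * L - z. The breach-probability function below is one of the example classes from the original paper, and the parameter values are invented for illustration; the well-known Gordon-Loeb result is that the optimal investment never exceeds about 37% (1/e) of the expected loss v*L.

```python
# Numerical sketch of the Gordon-Loeb model. The breach-probability function
# S(z, v) = v / (a*z + 1)**b is one of the example classes from the original
# paper; the parameter values below are invented for illustration only.
import numpy as np

v, L = 0.5, 1_000_000     # vulnerability (breach probability with no investment), loss
a, b = 0.0001, 1.5        # productivity parameters of the security investment

z = np.linspace(0, 0.4 * v * L, 100_000)   # candidate investment levels
S = v / (a * z + 1) ** b                   # breach probability after investing z
enbis = (v - S) * L - z                    # expected net benefit of investment

z_star = z[np.argmax(enbis)]
print(f"optimal investment z*    ~ {z_star:,.0f}")
print(f"expected loss      v*L   = {v * L:,.0f}")
print(f"ratio              z*/vL = {z_star / (v * L):.2f}  (Gordon-Loeb bound: 1/e ~ 0.37)")
```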

For one, some things are easy to do, and some are hard; but the easy things are often not the right things to do. Low-level cybersecurity professionals sometimes make the wrong choices, being risk-seeking on big-ticket items like device policy management, two-factor authentication, training, and auditing, while being risk-averse on matters that are within their control. Back in 2015, Armin Sarabi, Parinaz Naghizadeh, Yang Liu, and Mingyan Liu set out to answer this question. The table below, liberally borrowed from their paper, shows a risk analysis of different sectors.

Sarabi et al., Prioritizing Security Spending: A Quantitative Analysis of Risk Distributions for Different Business Profiles, Workshop on the Economics of Information Security, 2015.

What this says is that, based on reports received, configuration errors were a substantial risk factor pretty much everywhere except accommodation and food services, which suffered instead because employees share credentials. It was a limited survey, and surely the picture has changed since then. In the intervening time, cloud computing has become far more prevalent, and we have seen numerous state actors take on a much bigger, and nastier, role. It is useful, however, for a CISO to have situational awareness of what sorts of common risks are being encountered, and to have some notion of the best practices to counter those risks, so that whatever a firm spends is effective.

Expenditures alone don’t guarantee against break-ins. Knowing one’s suppliers and their practices is also critical. Knowing that Verkada had sloppy practices would have both deterred some customers from using its cameras and, in turn, encouraged that provider to clean up its act. Again, situational awareness matters.


Gordon-Loeb diagram by Luca Rainieri (own work), CC BY-SA 4.0

Where a bad review really makes for poor security

Releasing unstable software harms cybersecurity for everyone, not just those who install the product.

Most consumers do not take the time to upgrade their devices simply because vendors want them to: there has to be something in it for them.  Apple, however, has been an exception.  Studies have repeatedly shown that Apple users do regularly upgrade their phones.  Just one month after release, the latest iOS version was installed on 52% of Apple devices.  By comparison, summing the adoption of all Android releases from 2015 to the present gets you that same number, with the latest releases coming in at around 20% of the total.

This becomes a Big Deal when we start talking about vulnerabilities and zero-day exploits.  If there is a bug in your device, it is running an older version of the code, and you do not update, then that device can be used to attack you or someone else.  This is something that Microsoft learned the hard way in the last decade when it snuck extra software into a security update, losing the trust and confidence of its users, along with their willingness to update.

In his review, Gordon Kelly told his Forbes readers not to upgrade to the latest Apple iOS release precisely because it may be too risky: the release itself, he argues, was rushed.  When considering release timing, any vendor has to balance stability and testing against feature availability and security.  Apple may well have gotten the balance wrong this time.  The review in and of itself harms cybersecurity, not because the reviewer is wrong, but because the result will be that fewer people correct whatever vulnerabilities exist in the release (as of this writing, information about what is fixed hasn’t been disclosed).  Moreover, such reviews reinforce a bad behavior: delaying upgrades.  I call it a bad behavior because it puts others at risk.

This isn’t something that can be fixed with a magic wand.  We certainly cannot fault Mr. Kelly for publishing his analysis and recommendations.  If we wait for perfect security, we will never see another feature release.  On the other hand, if things get too rushed, we see such bad reviews.  Perhaps this argues that OS vendors like Apple and Google should continue to provide security-only releases that overlap their major releases, at least until the new releases are stable, as vendors such as Microsoft and Cisco do.  It costs money and people to support multiple releases, but it might be the right thing to do for the billions of devices that are each and every one a point of attack.

Ain’t No Perfect. That’s why we need network protection.

If Apple can blow it, so too can the rest of us. That’s why a layered defensive approach is necessary.

When we talk about secure platforms, there is one name that has always risen to the top: Apple.  Apple’s business model for iOS has repeatedly been demonstrated to provide superior security results over its competitors.  In fact, Apple’s security model is so good that governments feel threatened by it, to the point that we have had repeated calls for some form of back door into their phones and tablets.  CEO Tim Cook has repeatedly taken the stage to argue for such strong protection, and indeed I personally have friends who take this stuff so seriously that they lose sleep over some of the design choices that are made.

And yet just this last week, we learned of a vulnerability that was as easy to exploit as typing “root” twice to gain privileged access.

Wait what?

Wait. What?

Ain’t no perfect.

If the best and the brightest of the industry can occasionally have a flub like this, what about the rest of us?  I recently installed a single sign-on package from Ping Identity, a company whose job it is to provide secure access.  This simple application, which generates cryptographic one-time passcodes to be used as passwords, weighs in at over 70 megabytes and includes a complex Java runtime environment (JRE).  How many bugs remain hidden in those hundreds of thousands of lines of code?
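
For perspective on how small the core function actually is, the heart of a time-based one-time passcode generator, in the spirit of RFC 6238, fits in a couple of dozen lines.  This is a minimal sketch and emphatically not Ping Identity's code; a real product also needs enrollment, secure key storage, a UI, and much else besides, which is where the megabytes and the risk accumulate.

```python
# Minimal time-based one-time passcode (TOTP) generator in the spirit of
# RFC 6238. Illustrative sketch only; not Ping Identity's implementation.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Return the current one-time passcode for a base32-encoded secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # number of elapsed periods
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


if __name__ == "__main__":
    # Example secret; real secrets are provisioned by the service at enrollment.
    print(totp("JBSWY3DPEHPK3PXP"))
```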

Now enter the Internet of Things, where the manufacturers of devices that have not traditionally been connected to the network have not had decades to develop security expertise.  What sort of problems lurk in each and every one of those devices?

It is simply not possible to assure perfect security, and because computers are designed by imperfect humans, all these devices are imperfect.  Even devices that we believe are secure today will have vulnerabilities exposed in the future.  This is one of the reasons why the network needs to play a role.

The network stands between you and attackers, even when devices have vulnerabilities.  The network is best positioned to protect your devices when it knows what sort of access a device needs in order to operate properly.  That describes your washing machine.  But even for your laptop, where you might want to access whatever you want, whenever you want, through whatever system you wish to use, informing the network makes it possible to stop all the communications that you don’t want.  To be sure, endpoint manufacturers should not rely solely on network protection.  Devices should be built with as much protection as is practicable and affordable.  The network provides an additional layer of protection.
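
The enforcement logic itself is not complicated.  Here is a minimal sketch of a per-device allow list; the device names and endpoints are invented for the example, and in practice this policy would be enforced by switches, routers, or firewalls rather than by application code:

```python
# Illustrative per-device allow-list enforcement. Device names and endpoints
# are invented; a real network would enforce this in its switching, routing,
# or firewall layer, not in Python.
ALLOWED_DESTINATIONS = {
    # A single-purpose device only needs its manufacturer's cloud service.
    "washing-machine": {("updates.example-appliance.com", 443)},
    # A general-purpose laptop is left unrestricted in this sketch.
    "laptop": None,
}


def permit(device: str, dest_host: str, dest_port: int) -> bool:
    """Return True if the device may open a connection to (dest_host, dest_port)."""
    allowed = ALLOWED_DESTINATIONS.get(device)
    if allowed is None:
        return True                      # unrestricted (or unknown) device
    return (dest_host, dest_port) in allowed


print(permit("washing-machine", "updates.example-appliance.com", 443))  # True
print(permit("washing-machine", "attacker.example.net", 80))            # False
```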

Endpoint manufacturers thus far have not done a good job of making use of the network for protection.  That requires a serious rethink, and Apple is the poster child for why.  They are the best and the brightest, and they got it wrong this time.

Pew should evolve its cybersecurity survey

Pew should evolve the questions they are asking and the advice they are giving based on how the threat environment is changing. But they should keep asking.

Last year, Pew Research surveyed just over 1,000 people to try to get a feel for how informed they are about cybersecurity.  That’s a great idea, because it tells us as a society how well consumers are able to defend themselves against common attacks.  Let’s consider some ways that this survey could be evolved, and how consumers can mitigate certain common risks.  Keep in mind that Pew conducted the survey in June of last year, in a fast-changing world.

Several of the questions related to phishing, Wi-Fi access points, and VPNs.  VPNs have been in the news recently because of the Trump administration’s and Congress’s backtracking on privacy protections.  While privacy invasion by service providers is a serious problem, accessing one’s bank at an open access point is probably considerably less so.  There are two reasons for this.  First, banks almost all make use of TLS to protect communications.  Attempts to fake bank sites by intercepting communications will, at the very least, produce a warning that browser manufacturers have made increasingly difficult to bypass.  Second, many financial institutions provide mobile apps that take some care to validate that the user is actually talking to their service.  In this way, these apps mark a significant reduction in phishing risk.  Yes, the implication is that using a laptop with a web browser is a slightly riskier means of accessing your bank than the app it likely provides, and yes, there’s a question hiding there for Pew in its survey.

Another question on the survey refers to password quality.  While this is something of a problem, there are two bigger problems hiding behind it that consumers should understand:

  • Reuse of passwords.  Consumers will often reuse passwords simply because it’s hard to remember many of them.  Worse, many password managers themselves have had vulnerabilities.  Why wouldn’t they be targets?  It’s like the apocryphal Willie Sutton quote about robbing banks because that’s where the money is.  Still, with numerous break-ins, such as those that occurred at Yahoo! last year*, and the others that have surely gone unreported or unnoticed, reuse of passwords is a very dangerous practice.
  • Aggregation of trust in smart phones.  As recent articles about U.S. Customs and Border Protection demanding access to smart phones demonstrate, access to many services such as Facebook, Twitter, and email can be gained just by gaining access to the phone.  Worse, because SMS and email are often used to reset user passwords, access to the phone itself typically means easy access to most consumer services.

One final area that requires coverage: as the two followers of my blog are keenly aware, IoT presents a whole new class of risk that Pew has yet to address in its survey.

The risks I mention were not well understood as recently as five years ago.  But they are now, and they have been for at least the last several years.  Pew should keep surveying, and keep informing everyone, but they should also evolve the questions they are asking and the advice they are giving.


* Those who show disdain toward Yahoo! may find they themselves live in an enormous glass house.

Krebs attacked: IoT devices blamed, and MUD could help

It’s rare that hackers give you a gift, but last week that’s exactly what happened.  Brian Krebs is one of the foremost security experts in the industry, and his well-known web site krebsonsecurity.com was brought down by a distributed denial of service (DDoS) attack.  Attackers made use of what is said to be the largest botnet ever to attack Akamai, Krebs’s content service provider.

Why would one consider this a gift?  First of all, nobody was hurt.  This attack took down a web site that is not critical to anyone’s survival, not even Krebs’s, and the site was rehomed and back online in a very short period of time.

Second, the attackers revealed at least some of their capabilities by lighting up the network of hacked devices for researchers to examine and eventually take down.  One aspect of this attack was the use of “IoT” devices: non-general-purpose computers that are used to control some other function.  According to Krebs, the attacks made use of thermostats, web cameras, digital video recorders (DVRs) and, yes, Internet routers.  The attacks themselves created an HTTP connection to the web site, retrieved a page, and closed the connection.  From the defender’s standpoint, that is a resource-intensive attack to absorb.

Let’s ask this question: why would any of those systems normally talk to anything other than the small number of cloud services that are intended to support them?  This is the sort of abuse that Manufacturer Usage Descriptions (MUD) is meant to defend against.  MUD works by providing a formal language and mechanism for manufacturers to specify which systems a device is designed to connect with.  The converse, therefore, is that the network can prevent the device from both being attacked and attacking others.  The key to all of this is manufacturers and their willingness to describe their devices.  The evolving technical details of MUD can be found in an Internet Draft, and you can create a test MUD file against that draft by using MUD File Maker.  I’ll go into more detail about MUD File Maker in a later post.
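
To give a flavor of what such a description might look like, here is a rough sketch that emits a MUD-style policy permitting a device to talk only to its manufacturer’s controller over TLS.  The structure loosely follows the draft’s YANG-based JSON serialization but may not match it field for field, and the hostnames and MUD URL are invented:

```python
# Rough sketch of generating a MUD-style usage description. Field names are
# modeled loosely on the MUD Internet Draft and may not match it exactly;
# hostnames and the MUD URL are invented for illustration.
import json

mud_file = {
    "ietf-mud:mud": {
        "mud-url": "https://example-manufacturer.com/lightbulb.json",
        "systeminfo": "Example connected lightbulb",
        "from-device-policy": {
            "access-lists": {"access-list": [{"name": "from-lightbulb"}]}
        },
        "to-device-policy": {
            "access-lists": {"access-list": [{"name": "to-lightbulb"}]}
        },
    },
    "ietf-access-control-list:acls": {
        "acl": [
            {
                "name": "from-lightbulb",
                "aces": {
                    "ace": [
                        {
                            "name": "allow-controller",
                            "matches": {
                                # Only the manufacturer's cloud controller, over TLS.
                                "ipv4": {"ietf-acldns:dst-dnsname": "controller.example-manufacturer.com"},
                                "tcp": {"destination-port": {"operator": "eq", "port": 443}},
                            },
                            "actions": {"forwarding": "accept"},
                        }
                    ]
                },
            }
        ]
    },
}

print(json.dumps(mud_file, indent=2))
```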

Would MUD eliminate all attacks?  No, but MUD adds a helpful additional layer of protection that manufacturers and networks should use.

This time it was a blog that was taken down.  We are in a position to reduce such attacks the next time, when they may be more serious.  That’s the gift the hackers gave us.  Now we just need to act.