Creating standard inventories for Ubuntu systems

Here’s a tool to create standard SBOMs from Ubuntu distribution information.


As I’ve previously mentioned on this blog, software bills of materials (SBOMs) are software ingredient lists similar to what you would find on a can of soup. The purpose of these lists is to determine if something bad is in the mix, so that administrators can figure out where their risks are. This is why President Biden’s Executive Order from last May specifically called them out.

Here now is a tool that I’ve just posted to PyPI called apt2sbom. This tool is specific to Ubuntu; similar tools can be built for other distributions. It takes the information already present on an Ubuntu system and collects it into one of the standard formats, either SPDX or CycloneDX.

% pip3 install apt2sbom
[...]
% apt2sbom -h

usage: apt2sbom [-h] (-j | -y | -c) [-p]

generate SPDX file from APT inventory

optional arguments:
  -h, --help       show this help message and exit
  -j, --json       Generate JSON SPDX output
  -y, --yaml       Generate YAML SPDX output
  -c, --cyclonedx  Generate CycloneDX JSON output
  -p, --pip        Include PIP files

The resulting file is then suitable for import into tooling that can spot vulnerabilities in particular versions of software.
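For a rough sense of what the receiving tooling does, here is a minimal sketch, not part of apt2sbom itself, that lists package names and versions from the generated SBOM. It assumes you have saved the JSON output to a file called sbom.json and that the file follows the standard SPDX JSON layout, with a top-level "packages" array carrying "name" and "versionInfo" fields.

#!/usr/bin/env python3
# Minimal sketch: list package names and versions from an SPDX JSON SBOM.
# Assumes the SBOM was saved to sbom.json and that the SPDX "packages"
# array carries "name" and "versionInfo" fields.
import json

with open('sbom.json') as f:
    sbom = json.load(f)

for pkg in sbom.get('packages', []):
    print(pkg.get('name'), pkg.get('versionInfo', 'unknown'))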

The package is a little on the early side. There might still be a few bugs here or there. If you find one, just post it to the source repository as an issue.

Would this be considered a complete SBOM? Probably not, because there may be software installed on a system that came from neither apt nor pip. However, it’s fairly easy to add additional elements to these files, particularly the JSON ones.
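As an illustration, here is a hedged sketch of what adding such an element might look like. The package name and fields shown are made up, and a real entry would carry more of the SPDX package fields.

#!/usr/bin/env python3
# Sketch: append a locally built component to an existing SPDX JSON SBOM.
# "my-internal-tool" is purely illustrative, and only a few of the many
# SPDX package fields are filled in.
import json

with open('sbom.json') as f:
    sbom = json.load(f)

sbom.setdefault('packages', []).append({
    'name': 'my-internal-tool',            # hypothetical local software
    'SPDXID': 'SPDXRef-my-internal-tool',
    'versionInfo': '1.2.3',
    'downloadLocation': 'NOASSERTION',
})

with open('sbom.json', 'w') as f:
    json.dump(sbom, f, indent=2)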

Keeping that software up to date: how hard could it be?

Think you can keep everything up to date with the latest security fixes? Think again.

One common piece of advice you will hear from cybersecurity experts is that you should always keep your software up-to-date, so that vulnerabilities can be corrected. We like to believe that consumers are the biggest offenders in terms of keeping old software around. After all, Grandpa doesn’t always know how to upgrade his iPhone.

Can of Campbell’s Chicken Soup

This isn’t so easy for professionals. Let’s take a web site. Your average web site is composed of numerous servers, each running all manner of code, from the operating system, to standard applications like Apache, to web support code such as Python and PHP, to back-end services like MongoDB. ALL of it has to be updated.

What does the inventory look like? Answering that has been the work of a group of people who have created something known as software bills of materials, or SBOMs. One could think of an SBOM as the ingredient list on the back of a can of soup, only there’s software inside.

So how does one generate the inventory? And this is where things can get a bit interesting. On any reasonable UNIX machine there exists a package manager. In the case of Ubuntu, that would be apt, which is built on top of Debian’s dpkg. One can simply type “apt list --installed” to get the installed packages on a system, right?

WRONG.

Of course that will get you software that apt has installed, but if your site runs with Python, then you also need to get a software list from Python’s package manager, pip. In fact, “pip freeze” provides this information, and that will get you most, but likely not all, Python packages. Now repeat for Node.js’s npm, and others (assuming you can find all of them).
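To give a flavor of what collecting that inventory looks like, here is a rough sketch that shells out to dpkg and pip. It covers only those two package managers; other tools each have their own output format to parse.

#!/usr/bin/env python3
# Rough sketch: pull package inventories from dpkg and pip into one list.
# Other package managers (npm, gem, ...) would each need their own parsing.
import subprocess

def dpkg_packages():
    # "dpkg-query -W" prints one "name<TAB>version" line per installed package
    out = subprocess.run(['dpkg-query', '-W'],
                         capture_output=True, text=True, check=True).stdout
    return [tuple(line.split('\t', 1)) for line in out.splitlines() if line]

def pip_packages():
    # "pip3 freeze" prints "name==version" lines for most installed packages
    out = subprocess.run(['pip3', 'freeze'],
                         capture_output=True, text=True, check=True).stdout
    return [tuple(line.split('==', 1)) for line in out.splitlines() if '==' in line]

inventory = dpkg_packages() + pip_packages()
print(f'{len(inventory)} packages found')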

Now you might want to update those packages. That was the problem statement, after all. This should be simple enough, right? Well, let’s start with apt. It generally is simple enough, at least to start with: one simply runs “apt upgrade -y” and package upgrades happen. Of course, you have to test all of your code afterward, as apt won’t do that for you. Your CI/CD process is hopefully doing this for you.

Next we go to Python. pip’s upgrade command requires a list of packages. Conveniently, we have the one we froze above. However, it turns out that not all Python packages are managed by pip, and pip knows this: it will happily blow chunks when you try to upgrade a package it does own that requires an upgrade of something it doesn’t own. In the Ubuntu world there are a few good examples of this, the Cairo and MongoDB bindings among them. Instead, these packages are managed by apt. Well, that’s all good, since we can use the apt upgrade process, right?

WRONG.

Oftentimes these packages have dependencies of their own that might go unmet as part of an upgrade.
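Here is a hedged sketch of the sort of workaround people end up writing: upgrade pip-managed packages one at a time, skip the names apt owns on this particular system, and tolerate individual failures. The skip list is illustrative, not exhaustive.

#!/usr/bin/env python3
# Sketch: upgrade pip-managed packages one at a time, skipping names that
# apt owns on this system and tolerating individual failures.
import subprocess

APT_OWNED = {'pycairo', 'pymongo'}   # hypothetical examples of apt-managed packages

frozen = subprocess.run(['pip3', 'freeze'], capture_output=True,
                        text=True, check=True).stdout
for line in frozen.splitlines():
    name = line.split('==')[0]
    if name.lower() in APT_OWNED:
        continue
    result = subprocess.run(['pip3', 'install', '--upgrade', name])
    if result.returncode != 0:
        print(f'upgrade of {name} failed; leaving it for apt or a human')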

Alpine Linux has applied an interesting solution to all of this: provide common Python packages through its own package manager, apk. By using apk, you assuredly will not get the latest version of a package, but you will get consistent upgrade behavior. But this presents its own set of problems:

A publication hierarchy, starting from your source on GitHub and then going who knows where.

What you are looking at is a publication hierarchy. Now we have software in multiple distributions, deriving from a repository, probably from different branches. It’s possible that there are even multiple repositories for the same code. If we are lucky, the repo owner is doing the integration to push updates to Ubuntu, PyPI, and Alpine. But we are almost never going to be that lucky, if for no other reason than a lack of authorization to publish to all of them. And so we’ll end up with a mix of push and pull. It’s the pull that causes the version skew, and this is just a single distribution.

Now, I’ve picked on Python in this post, but pip and PyPI actually do a pretty reasonable job of managing the packages they installed, so long as there aren’t dependencies on stuff they didn’t install.

So what does this mean in terms of managing your product or package?

I’m not ready to make any sweeping recommendations about what to do. One option is to just deal, and upgrade when you can. The problem is that your velocity will be slower than if you pull all source from their repos directly. On the other hand, doing the latter is a whole lot of work, requiring a whole lot of expertise when builds blow up.

This is a good example of where OSS needs to mature a bit more.


Chicken Noodle Soup by Willis Lam – Campbell’s Chicken Noodle Soup, CC BY-SA 2.0.

The Challenges of CISOs

Are CISOs investing enough in protection? Do they have good visibility into threats?

Aub Persian Zam Zam

Long ago there was a bar on Haight St. called Aub Persian Zam Zam, run by a cranky guy named Bruno, who hated everyone and preferred to serve only martinis. If you walked in before 7:00pm, he told you that table service started at 8:00pm; and if you walked in after 7:00pm, table service had stopped at 6:00pm. As a customer, I felt a little like a Chief Information Security Officer (CISO).

CISOs constantly face a challenge with their boards: how much to invest in security. If you haven’t been hacked, then you are accused of spending too much on protection (and might be out of a job); and if you have, then you spent too little (and might be out of a job). But CISOs have to operate in the here and now. They don’t have the luxury of hindsight. What CISOs need is an appropriate level of investment to secure their charges and the situational awareness to make good decisions.

Much is being made of the lax security that SolarWinds had. As Bruce Schneier pointed out in the New York Times, they had been hacked not just once, but several times. There was the attack on the company and then there was the attack on their customers. The attack on the customers involved the use of a DNS-based command and control (C&C) network, very stealthily crafted code, and the potential for an infected system to probe whatever was available to it at government and industrial installations across the globe. This may have been particularly damaging in the case of SolarWinds because the legitimate software stood at a privileged point within an enterprise, with access to lots of other core infrastructure. The Russians picked a really juicy target. They were, if you will, an incident waiting to happen, and happen it did. The attack was detectable, but detection required an appropriate investment not only in tooling but in the back-end expert services that provide situational awareness.

Not every target is quite so juicy. Most hackers hit web servers or laptops with various viruses. The soft underbelly of cybersecurity, however, is the control systems, which themselves have access to other infrastructure, as was demonstrated this past month when a hacker attempted to poison a Florida city’s water supply with lye. Assuming it has one, Oldsmar’s CISO might have some explaining to do. How might that person do so, especially when the compromised system is the very one meant to protect the others? It starts by knowing how one compares to one’s peers in terms of expenditures. It’s possible both to under- and to overspend.

Gordon-Loeb Model

Optimal investment in cybersecurity has been an ongoing area of research. The seminal Gordon-Loeb Model demonstrates a point of optimality and a point of diminishing returns for risk mitigation. The model doesn’t give you the shape of either curve. That was the next area of research.
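To make the model slightly more concrete, here is an illustrative sketch using one breach-probability function of the form Gordon and Loeb study, S(z, v) = v / (αz + 1)^β, together with their well-known bound that the optimal investment never exceeds roughly 1/e (about 37%) of the expected loss. The parameter values below are made up; the point is the shape of the curve, not the numbers.

#!/usr/bin/env python3
# Illustrative sketch of the Gordon-Loeb model with a breach-probability
# function of the form S(z, v) = v / (a*z + 1)**b.  Parameters are invented.
import math

v = 0.6          # probability of a breach with no additional investment
L = 1_000_000    # expected loss from a breach, in dollars
a, b = 0.0001, 1.5

def enbis(z):
    """Expected net benefit of investing z dollars in security."""
    s = v / (a * z + 1) ** b          # breach probability after investing z
    return (v - s) * L - z

# Brute-force search for the investment that maximizes the net benefit.
best_z = max(range(0, 200_000, 100), key=enbis)
print(f'optimal investment ~ ${best_z:,}, net benefit ~ ${enbis(best_z):,.0f}')
print(f'Gordon-Loeb upper bound (1/e)*v*L = ${v * L / math.e:,.0f}')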

For one, some things are easy to do and some are hard, but the easy things are often not the right things to do. Low-level cybersecurity professionals sometimes make the wrong choices, being risk-seeking on big-ticket items like device policy management, two-factor authentication, training, and auditing, while being risk-averse on matters that are within their control. Back in 2015, Armin Sarabi, Parinaz Naghizadeh, Yang Liu, and Mingyan Liu set out to answer this question. The table below, liberally borrowed from their paper, shows a risk analysis of different sectors.

Sarabi et al., Prioritizing Security Spending: A Quantitative Analysis of Risk Distributions for Different Business Profiles, Workshop on the Economics of Information Security, 2015.

What this says is that, based on reports received, configuration errors were a substantial risk factor pretty much everywhere but accommodation and food services, which suffered instead because employees share credentials. It was a limited survey, and surely the model has changed since then. In the intervening time, cloud computing has become far more prevalent, and we have seen numerous state actors take on a much bigger, and nastier, role. It is useful, however, for a CISO to have situational awareness of what sorts of common risks are being encountered, and some notion of the best practices to counter those risks, so that whatever a firm spends is effective.

Expenditures alone don’t guarantee against break-ins. Knowing one’s suppliers and their practices is also critical. Knowing that Verkada had sloppy practices would have both deterred some from using their cameras and, in turn, encouraged that provider to clean up its act. Again, situational awareness matters.


Gordon-Loeb Diagram by Luca Rainieri – Own work, CC BY-SA 4.0.

Where a bad review really makes for poor security

Releasing unstable software harms cybersecurity for everyone, not just those who install the product.

Most consumers do not take the time to upgrade their devices simply because vendors want them to: there has to be something in it for them. Apple, on the other hand, has been an exception. Studies have repeatedly shown that Apple users do regularly upgrade their phones. Just one month after release, their latest version was installed on 52% of their devices. By comparison, summing all Android releases from 2015 to the present gets you that same number, with the latest releases coming in around 20% of the total.

This becomes a Big Deal when we start talking about vulnerabilities and zero-day exploits. If your device is running an older version of the code with a bug in it, and you do not update, then that device can be used to attack you or someone else. This is something that Microsoft learned the hard way in the last decade when it snuck extra software into a security update, losing the trust and confidence of its users, and with it their willingness to update.

In his review, Gordon Kelly has told his Forbes readers not to upgrade to the latest Apple iOS release precisely because it may be too risky, arguing that the release itself was rushed. When considering release timing, any vendor always has to balance stability and testing against feature availability and security. Apple may well have gotten the balance wrong this time. The review in and of itself harms cybersecurity, not because the reviewer is wrong, but because the result will be that fewer people will have corrected whatever vulnerabilities exist in the release (as of this writing, information about what is fixed hasn’t been disclosed). Moreover, such reviews reinforce a bad behavior: delaying upgrades. I call it a bad behavior because it puts others at risk.

This isn’t something that can be fixed with a magic wand. We certainly cannot fault Mr. Kelly for publishing his analysis and recommendations. If we wait for perfect security, we will never see another feature release. On the other hand, if things get too rushed, we see such bad reviews. Perhaps this argues that OS vendors like Apple and Google should continue to provide security-only releases that overlap their major releases, at least until the major releases are stable, which is what other vendors such as Microsoft and Cisco do. It costs money and people to support multiple releases, but it might be the right thing to do for the billions of devices that are each and every one a point of attack.

Ain’t No Perfect. That’s why we need network protection.

If Apple can blow it, so too can the rest of us. That’s why a layered defensive approach is necessary.

When we talk about secure platforms, there is one name that has always risen to the top: Apple. Apple’s business model for iOS has repeatedly been demonstrated to provide security results superior to its competitors’. In fact, Apple’s security model is so good that governments feel threatened enough by it that we have had repeated calls for some form of back door into their phones and tablets. CEO Tim Cook has repeatedly taken the stage to argue for such strong protection, and indeed I personally have friends who I know take this stuff so seriously that they lose sleep over some of the design choices that are made.

And yet this last week, we learned of a vulnerability that was as easy to exploit as typing “root” twice to gain privileged access.

Wait what?

Wait. What?
Ain’t no perfect.

If the best and the brightest of the industry can occasionally have a flub like this, what about the rest of us? I recently installed a single sign-on package from Ping Identity, a company whose job it is to provide secure access. This simple application, which generates cryptographic sequences of numbers to be used as passwords, is over 70 megabytes and includes a complex Java runtime environment (JRE). How many bugs remain hidden in those hundreds of thousands of lines of code?
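For contrast, here is a sketch, emphatically not Ping’s code, of a generic RFC 6238-style one-time password generator using only the Python standard library. The core function fits in a couple of dozen lines; the other 70 megabytes are everything else.

#!/usr/bin/env python3
# Generic RFC 6238 (TOTP) one-time password sketch using only the standard
# library.  Not any vendor's implementation; the secret below is an example,
# not a real credential.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, digits=6, period=30):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack('>Q', int(time.time()) // period)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack('>I', mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp('JBSWY3DPEHPK3PXP'))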

Now enter the Internet of Things, where the manufacturers of devices that have not traditionally been connected to the network do not have decades of security expertise behind them. What sort of problems lurk in each and every one of those devices?

It is simply not possible to assure perfect security, and because computers are designed by imperfect humans, all these devices are imperfect.  Even devices that we believe are secure today will have vulnerabilities exposed in the future.  This is one of the reasons why the network needs to play a role.

The network stands between you and attackers, even when devices have vulnerabilities. The network is in the best position to protect your devices when it knows what sort of access a device needs to operate properly. That’s the case for your washing machine. But even for your laptop, where you might want to access whatever you want, whenever you want, through whatever system you wish to use, informing the network makes it possible to stop all of the communications that you don’t want. To be sure, endpoint manufacturers should not rely solely on network protection. Devices should be built with as much protection as is practicable and affordable. The network provides an additional layer of protection.
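As a toy illustration of the idea, here is a sketch of the kind of per-device allow-list the network could enforce once it knows what access a device actually needs. The device name and endpoints are made up.

#!/usr/bin/env python3
# Toy illustration: a per-device allow-list of permitted destinations.
# The device name and endpoints are invented for the example.
ALLOWED = {
    'washing-machine': {('update.example-appliance.com', 443),
                        ('ntp.example-appliance.com', 123)},
}

def permit(device, dest_host, dest_port):
    """Allow a flow only if this device is expected to talk to that endpoint."""
    return (dest_host, dest_port) in ALLOWED.get(device, set())

print(permit('washing-machine', 'update.example-appliance.com', 443))  # True
print(permit('washing-machine', 'evil.example.net', 443))              # False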

Endpoint manufacturers thus far have not done a good job of making use of the network for protection. That requires a serious rethink, and Apple is the poster child as to why. They are the best and the brightest, and they got it wrong this time.