Let’s Get Simple

In the summer of 2004 I gave an invited talk at the USENIX Technical Symposium entitled “How Do I Manage All Of This?”  It was a plea to the academics to ease off on new features and figure out how to manage the old ones.  Just about anything can be managed if you spend enough time on it.  But if you have enough of those things, you won’t have enough time.  It’s a simple care-and-feeding argument: when you have enough pets, you need to be efficient about both.  Computers, applications, and people all require care and feeding.  The more care and feeding, the more chances for a mistake.  And that mistake can be costly.  According to one Yankee Group study from 2003, between thirty and fifty percent of all outages are due to configuration errors.  When a reporter asked me what I believed the answer was to dealing with complexity in the network, I replied simply, “Don’t introduce complexity in the first place.”

It’s always fun to play with new toys.  New toys sometimes require new network features.  And sometimes those features are worth it.  For instance, the ability to consolidate voice onto the data network has brought a reduction in the amount of physical infrastructure required.  The introduction of wireless has meant an even more drastic reduction.  In those two cases, the additional configuration complexity was likely warranted.  In particular, you’d want some limited amount of quality-of-service capability in your network.

In the 14th century the Franciscan friar William of Ockham articulated the principle that, all other things being equal, the simplest solution is the best.  We balance that principle against a quote attributed to Einstein: “Everything should be made as simple as possible, but not simpler.”  Over the next year I will attempt to highlight examples of where we have violated one or the other of these principles, as they become visible in the public press.

Until then, ask yourself this: what functionality is running on your computer right now that you neither need nor want?  That very same functionality is a potential vulnerability.  And which tools reduce complexity?  Consider, for instance, some netstat output:

% netstat -an|more
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:993             0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:995             0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:587             0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:110             0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:2544          0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:817           0.0.0.0:*               LISTEN
udp        0      0 0.0.0.0:32768           0.0.0.0:*
udp        0      0 127.0.0.1:53            0.0.0.0:*
udp        0      0 0.0.0.0:69              0.0.0.0:*
udp        0      0 0.0.0.0:111             0.0.0.0:*
udp        0      0 0.0.0.0:631             0.0.0.0:*
udp        0      0 127.0.0.1:123           0.0.0.0:*
udp        0      0 0.0.0.0:123             0.0.0.0:*
udp        0      0 :::32769                :::*
udp        0      0 fe80::219:dbff:fe31:123 :::*
udp        0      0 ::1:123                 :::*
udp        0      0 :::123                  :::*

It’s difficult for an expert to make sense of all of this stuff.  Heaven help those of us who aren’t experts.  So what do we do?  We end up running more programs to identify what we are already running.  In other words?  That’s right.  Additional complexity.  What would have happened if the name of the program were simply output on each line?  That is what lsof does, and why it is an example of reducing complexity through innovation.  Here’s a sample:

COMMAND     PID    USER   FD   TYPE DEVICE SIZE NODE NAME
xinetd     3837    root    5u  IPv4  10622       TCP *:pop3 (LISTEN)
xinetd     3837    root    8u  IPv4  10623       TCP *:pop3s (LISTEN)
xinetd     3837    root    9u  IPv4  10624       UDP *:tftp
named      3943   named   20u  IPv4  10695       UDP localhost:domain
named      3943   named   21u  IPv4  10696       TCP localhost:domain (LISTEN)
named      3943   named   24u  IPv4  10699       UDP *:filenet-tms
named      3943   named   25u  IPv6  10700       UDP *:filenet-rpc
named      3943   named   26u  IPv4  10701       TCP localhost:953 (LISTEN)
named      3943   named   27u  IPv6  10702       TCP localhost:953 (LISTEN)
ntpd       4026     ntp   16u  IPv4  10928       UDP *:ntp
ntpd       4026     ntp   17u  IPv6  10929       UDP *:ntp
ntpd       4026     ntp   18u  IPv6  10930       UDP localhost:ntp
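
If you want to reproduce a listing like the one above, lsof’s -i option selects open network sockets; the sample here resembles its default output, run as root so that every user’s processes are visible.  A minimal sketch (option behavior varies slightly across platforms):

% lsof -i                      # every open network socket, with the owning program
% lsof -i -nP | grep LISTEN    # -n/-P skip name lookups; show only listening TCP sockets

The -P flag, by suppressing port-to-name translation, also avoids the occasionally puzzling service names (such as filenet-tms above) that come from /etc/services.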

Voting Machines: Thank Heavens for Academia

It is often said that the purpose of academic research is to seek the truth, no matter where it leads.  The purpose of industry representatives is often to obscure the truths they do not like.  Such apparently was the case at a recent hearing of the Texas House of Representatives’ Committee on Elections.  These are the people who are nominally supposed to ensure that each citizen of Texas gets an opportunity to vote, and that his or her vote is counted.  The committee provides oversight and legislation for electronic voting.

How secure is your electronic vote, compared to a paper ballot?  Can you have an electronic hanging chad?  A group of researchers has spent a fair amount of time answering that very question.  Drs. Ed Felten and Dan Wallach, among others, have looked at numerous voting systems and found all sorts of problems.  For instance, some voting machines are susceptible to viruses, and an infected machine can pass the infection along to its peers.  That’s not a problem, according to the manufacturers’ spokesmen.  But who are we to believe?  An academician whose purpose is to advance the state of the art and find truths, or a spokesman whose purpose is to obscure them?

Mistakes are made in many, if not all, elections and surveys.  Here are just a few questions:

  • What is an acceptable rate of error?  As the 2000 election demonstrated, even a hand count of paper ballots can have problems.
  • Rather than prevaricate, why shouldn’t the vendors of these voting machines fix the problems that have been reported?
  • What sort of regulations are appropriate?  The spokesmen all but demanded a common standard, inasmuch as they complained that there was none.

Conveniently, Dr. Wallach has an answer to that last question: his testimony recommends just such a standard.

For what it’s worth, as an expatriate I do not expect to use a voting machine for quite some time, but rather a paper ballot.

Good Fences Make Good Neighbors

When I was about 13 years old, my neighbors put a pool in their back yard.  However, they failed to put a fence around it.  My sister was only four years old at the time, and there were many children her age in the neighborhood.  Our community had an ordinance that required such fences, but the neighbors ignored it, as they did my parents’ pleas.

While you can question the wisdom of letting a four-year-old walk around on his or her own, at the time it was the norm for our community, and one day little Donald was on his own, dangling his feet in the neighbors’ unsupervised pool.  After running out of our house as fast as she could and pulling Donald away from the pool, my mother filed a complaint, and the neighbors had to pay a fine.  Donald’s parents could have sued.

Our neighbors created an attractive nuisance and needed to be held accountable.  While the analogy is not exact, regularly updating your software to the latest versions similarly reduces a computer’s exposure to vulnerabilities.  What’s more, there is a well-known network effect to doing so.  When you patch your software, not only do you protect your computer against attack by others, but you also prevent your computer from being used as a vehicle to attack others.  Put another way, not patching your software makes your system a nuisance to others.  The bad guys know this.  One study by Jianwei Zhuge et al. shows that exploits often appear in the wild before, or very shortly after, a patch is released.  A position paper written by Ross Anderson et al. for ENISA will tell you which vendors are better and which are worse at patching.

A new study released this week by researchers at ETH Zurich, Google, and IBM shows that even in the best case, Firefox, no more than 83% of users patch their browsers.  The worst case is Internet Explorer, where you are more likely than not to be missing the latest patch.

What does all this say?  First of all, it says that Firefox is probably doing a pretty good job.  One wonders what is going on with the other 17% of individuals who do not patch their browsers.  Perhaps we have another case of rational ignorance, as I discussed previously.  The study also says that Microsoft could do a better job.  Part of Microsoft’s problem is that it has previously released “security” patches that do more than fix security problems.  Distribution of Windows Genuine Advantage, which has been called a form of spyware, degraded people’s trust in Microsoft.

Apple isn’t all that much better than Microsoft.  For one, its patch rates are actually slower than Microsoft’s.  For another, Safari 3 broke things, which is precisely why many people do not upgrade.  Sun and HP are even worse.

Much as we like to blame vendors, in some cases we have nobody to blame but ourselves.  So here is something to do: check that you are running the latest version of the software you use.  If you use anything more than the standard application suite for your computer, there is a very good chance you are out of date.
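
On a system with a package manager, that check is cheap.  Here is a minimal sketch, assuming a Debian-style system (other platforms have analogous commands):

% sudo apt-get update          # refresh the package index
% apt-get -s upgrade | more    # -s only simulates, listing what is out of date without installing anything

The same idea applies to browsers and other self-updating applications: let the software tell you it is stale rather than trusting your memory.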

No Evidence That Data Breach Privacy Laws Work

Have you ever received a notice that your data privacy has been breached?  What the heck does that mean, anyway?  Most of the time it means that some piece of information you wouldn’t normally disclose to others, like a credit card number or your Social Security number, has been released unintentionally, and perhaps maliciously (e.g., stolen).  About five years ago states began passing data breach privacy laws that required authorized possessors of such information to notify victims when a breach occurred.  There were basically two goals for such laws:

  • Provide individuals warning that they may have suffered identity theft, so that they can take some steps to prevent it, like blocking a credit card or monitoring their credit reports; and
  • Provide a more general deterrent by embarrassing companies into behaving better. “Sunlight as a disinfectant,” as Justice Brandeis wrote.[1]

A study conducted by Sasha Romanosky, Rahul Telang, and Alessandro Acquisti at CMU found that, as of yet, no correlation can be found between these laws and identity theft rates.  There could be many reasons why the correlation isn’t there.  First, actual usage of the stolen information seems to occur in only a small percentage of cases.  Second, just because a light has been shined doesn’t mean there is anything the consumer will be capable of or willing to do.  For instance, suppose you buy something at your-local-favorite-website.com.  They use a credit card or billing aggregation service that has its data stolen, and so that service reports to you that your data has been stolen.  You might not even understand what that service has to do with you.  Even if you do, what are the chances that you would be willing to stop using your-local-favorite-website.com?  And if you hear about such a break-in from someone else, would it matter to you?  Economists call that last one rational ignorance.  In other words, hear no evil, see no evil.

Add to all of this that some people have said there are huge loopholes in some of the laws.  At WEIS and elsewhere, several not-so-innovative approaches by which some firms get around the need to disclose have been discussed.

This paper is not the final word on the subject, but clearly work needs to be done to improve these laws so that they have more impact.  As longitudinal studies go, this one isn’t very long.  It’s possible we’ll see benefits further down the road.

[1]  The Brandeis quote can be found in the paper I cited (which is why I used it).

Off To New Hampshire

Many of us are geeks.  We like to think that just because we have a good idea, other people will like it as well.  We’re particularly bad at user interface design and at understanding the underlying economic drivers for technology.  As a case in point, why is it that IPv6 hasn’t taken IPv4’s place, even though it has been in existence for nearly fifteen years and solves a real problem of address space shortage?  The answer can be found, I believe, in economics, which is to say that the motivation has not been there to spend the money to get people to move from one system to the other.

On Tuesday I am off to New Hampshire via Boston to attend the Workshop on the Economics of Information Security (WEIS).  In past years, WEIS has covered such topics as when to disclose vulnerabilities, the economics of the insurance industry and cyberthreat insurance, digital media protection mechanisms, and the risks of new technology introduction.  One past paper that I particularly enjoyed discussed the risks of homogeneity versus heterogeneity in an enterprise.  It has long been an axiom that if you want to protect yourself from systemic failure, you use redundant systems built using different methods.  In airplanes the rule is meant to keep passengers alive (although Airbus has flouted this idea, according to the Telegraph).

Cyberthreat insurance people take this to the extreme by not particularly liking even the idea of interoperability.  Their logic goes that any interoperating system can propagate a cascading failure, and that is potentially true.  Of course, while an insurance salesman might want you not to have an accident, his management needs some accidents to prove that insurance is necessary.  The extreme case of a cascading failure, however, has insurance people shaking in their boots.  They get away with insuring households and businesses against losses by (a) holding a reserve and (b) knowing that a fire or other natural accident can only cause so much damage in a local area.  In the case of a computer virus, they have no reason to believe that there is any locality, and so the policies tend to be very restrictive.

I have a few economic questions of my own to ask.  What will it take to motivate a service provider to adopt a new authentication mechanism that would provide benefit to OTHER service providers?  In other words, how will service providers serve the common good?  In general, by the way, they do.  They rightly recognize that if they don’t cooperate on their own, they will be made to do so under far less favorable terms.  But introducing new technology and new ways to cooperate is not exactly what they’re all looking for.  I am.  If we can find improved methods of authentication for end users, we can surely reduce the value a PC represents to a criminal.

Of course, this means we have to create a new authentication mechanism that actually does improve matters, but as my favorite theoreticians say, let’s assume that’s true, never mind reality.  What then has to happen for the mechanism to be adopted by consumers and providers alike?

Going back to that earlier question of what it will take for IPv6 to get deployed, at this year’s WEIS Jean Camp, Hillary Elmore, and Brandon Stephens have produced a paper that puts the question into a formal economic context.  While the work is neither the beginning nor the end of the discussion, it is a very good continuation.

You can soon expect a post that discusses the outcome of this year’s conference.