The Yahoo! Breach: What it means to you

Steps you should take after the Yahoo! breach.

Yesterday, Yahoo! announced that at least 500 million accounts have been breached.  This means that information you gave Yahoo! may be in the hands of hackers, but it could also mean a lot more.  The New York Times has an excellent interactive tool today that demonstrates how much of your information may have leaked, not just from Yahoo! but from other breaches.

Not only should you change your Yahoo! password, but you should also review every password and piece of information you shared with Yahoo!  In particular:

  1. Many people use the same password across multiple accounts.  If you did this, you should change the password on every system where it was used.  When you do, make sure that no password is shared between any two systems.
  2. Hackers are smart.  If you only tweak the same password a little bit for use on multiple systems, a determined hacker, or more likely a determined script, may well break into your other accounts.  For example, if your Yahoo! password was DogCatY! and your eBay password was DogCatEBay, you should assume the eBay account is broken as well.
  3. This means you should keep a secure record of which passwords are used where, for just this sort of eventuality.  By “secure” I mean encrypted and local.  Having two pristine USB keys (one for backup) is ideal, where the contents are encrypted at the application layer.  I also make use of Firefox’s password manager.  That in itself is a risk, because if Firefox is hacked your passwords may be exposed as well.
  4. Unfortunately, passwords may not be the only information the hackers have.  Yahoo! has previously made use of so-called “backup security questions”.  Not only is it important to disable those questions, but it is important to first review them to see where else you may have used them.  Security questions are a horrible idea for many reasons: they may reveal private aspects of your life, much of which could be discovered anyway.  Sites like United Airlines have recently implemented security questions.  My recommendation: choose random answers and record them in a secure place that is separate from your passwords (see the sketch after this list).
  5. It is possible that hackers have read any email you received at Yahoo!  In particular, you should review any financial accounts that send statements or notifications to your Yahoo! address.
  6. Use of cloud-based storage as a backup for your passwords should be viewed with great suspicion.  There have been a number of such tools that themselves have been found to be vulnerable.
  7. For those who use SMS as a secondary authentication factor, hackers may also have your cell phone number.  While SMS is not a secure form of communication, the chances of it being intercepted are relatively low.  The safest practice is not to rely solely on SMS for authentication.  My bank uses both a secret and an SMS message, relying on the tried and true two-factor authentication approach of something you have and something you know.  A better solution is a secret plus an app with a secure push notification, which is what MasterCard has done in Europe.
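
To make items 3 and 4 concrete, here is a minimal sketch of the idea in Python.  It is not the tool I use, and the file names are purely illustrative: it generates random answers with the standard secrets module and keeps them in a locally encrypted file using the third-party cryptography package (Fernet).  Keep the key file and the encrypted file on separate media, such as the two USB keys mentioned above.

    # Minimal sketch: random "security question" answers, stored encrypted and locally.
    # Requires the third-party package:  pip install cryptography
    import json
    import secrets
    from pathlib import Path

    from cryptography.fernet import Fernet

    KEY_FILE = Path("answers.key")    # keep this on one USB key
    VAULT_FILE = Path("answers.enc")  # keep this somewhere else

    def load_or_create_key() -> bytes:
        """Load the symmetric key, creating it on first use."""
        if KEY_FILE.exists():
            return KEY_FILE.read_bytes()
        key = Fernet.generate_key()
        KEY_FILE.write_bytes(key)
        return key

    def random_answer(parts: int = 4) -> str:
        """Return an unguessable 'answer' such as 'mJ3a-0Qxz-P_4c-9yTk'."""
        return "-".join(secrets.token_urlsafe(3) for _ in range(parts))

    def save_answers(answers: dict, key: bytes) -> None:
        """Encrypt the site -> question -> answer mapping and write it to disk."""
        VAULT_FILE.write_bytes(Fernet(key).encrypt(json.dumps(answers).encode("utf-8")))

    def load_answers(key: bytes) -> dict:
        """Decrypt and return the stored mapping."""
        return json.loads(Fernet(key).decrypt(VAULT_FILE.read_bytes()))

    if __name__ == "__main__":
        key = load_or_create_key()
        vault = {"united.com": {"Name of your first pet?": random_answer()}}
        save_answers(vault, key)
        print(load_answers(key))

The same structure works for recording which password is used where; the important properties are that the record is encrypted, that it lives on media you control, and that the answers themselves reveal nothing about your life.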

These suggestions are good for the sort of mass breach that we are seeing with Yahoo!  In addition, one has to be careful with the amount of trust placed in a cell phone.  If the phone is lost, you should assume that hackers will be able to get into it.  Keeping a record of the applications you use, particularly those that have financial or security implications, will help you recover from the loss.

These suggestions are written with the notion that Yahoo! is not going to be the only site that will have had this problem.  Although not to this scale, we’ve seen this sort of thing before, and we will see it again.  I’ll have more to say about this from an industry perspective in a while.


Yahoo picture by Sebastian Bergmann – originally posted to Flickr as Yahoo!, CC BY-SA 2.0

It doesn’t matter that much that Apple and Google encrypt your phone

Apple’s and Google’s announcements that they will encrypt information on your phone are nice, but they won’t help much.  Most data is in the cloud these days, and your protections in the cloud are governed by the laws of numerous countries, almost all of which have quite large exceptions.

At the Internet Engineering Task Force we have taken a very strong stand that pervasive surveillance is a form of attack.  This is not a matter of lack of trust in any one organization, but rather a statement that if one organization can snoop on your information, others will be able to do so as well, and they may not be as nice as the NSA.  The worst you can say about the NSA is that a few analysts got carried away and spied on their partners.  With real criminals it’s another matter.  As we have seen with Target, other large department stores, and now JP Morgan, theirs is a business, and you are their commodity, in the form of private information and credit card numbers.

So now here comes Apple, saying that it will protect you from the government.  Like all technology, this “advance” has its pluses and minuses.  To paraphrase a leader in the law enforcement community, everyone wants their privacy until it’s their child at risk.  However, in the United States, at least, we have a standard that the director of the FBI seems to have forgotten: it’s called probable cause.  It’s based on a dingy pesky old amendment to the Constitution, which states:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

So what happens if one does have probable cause?  This is where things get interesting.  If law enforcement has probable cause to believe that there is an imminent threat to life or property and cannot break into a phone, then something bad may happen.  Someone could get hurt, for instance.  Is that Apple’s fault?  And who has the right to interpret and enforce the Fourth Amendment?  If Apple has a right to do so, then do I have the right to interpret whatever laws I will?  On the other hand, Apple might respond that it has no responsibility to provide law enforcement anything, and that all it is doing is exercising the right of free speech to deliver a product that others use to communicate with.  Cryptographer and professor Daniel Bernstein successfully argued this case in the 9th Circuit in the 1990s.  And he was right to do so, because, going back to the beginning of this polemic, even if you believe your government to be benevolent, if it can access your information, so can a bad guy, and there are far more bad guys out there.

Apple hasn’t simply made this change because it doesn’t like the government.  Rather, the company has recognized that for consumers to put private information into their phones, they must trust the device not to be mishandled by others.  At the same time, Apple has said through its public statements that information that goes into its cloud is still subject to lawful seizure.  And this brings us back to the point that President Obama made at the beginning of the year: government risk isn’t the only form of risk.  The risk remains that private aggregators of information (like Apple and Google or, worse, Facebook) will continue to use your information for whatever purposes they see fit.  If you don’t think this is the case, ask yourself how much you pay for their services.

And since most of the data about you or that you own is either in the cloud or heading to the cloud, you might want to worry less about the phone or tablet and more about where your data actually resides.  If you’re really concerned about governments, then you might also want to ask this question: which governments can seize your data?  The answer is not straightforward, but there are three major factors:

  1. Where the data resides;
  2. Where you reside;
  3. Where the company that controls the data resides.

For instance, if you reside in the European Union, then nominally you should receive some protection from the Data Protection Directive.  Any company that serves European residents has to respect the rights specified in it.  On the other hand, there are of course exceptions for law enforcement.  If a server resides in some random country, however, like the Duchy of Grand Fenwick, perhaps there is a secret law that states that operators must provide the government all sorts of data and must not tell anyone they are doing so.  That’s really not so far from what the U.S. government did with National Security Letters.

There’s a new service that Cisco has rolled out, called the Intercloud, that neatly addresses this matter for large enterprises, providing a framework to keep some data local and some data in the cloud, with the enterprise retaining some control over which.  Whether that benefit will extend to consumers is unclear.

In the end I conclude that people who are truly worried about their data need to consider what online services they use, including Facebook, this blog you are reading right now, Google, Amazon, or anyone else.  They also have to consider how, if at all, they are using the cloud.  I personally think they have to worry less about physical devices, and that, largely speaking, Apple’s announcement is but a modest improvement in overall security.  The same could be said for IETF efforts.

Should the ITU Handle Cybersecurity or Cybercrime?

Cybercrime and cybersecurity are two very important topics that are largely being lost in the noise around the American elections, the Arab Spring, or the European banking crisis.  Nevertheless, there is an attempt by the ITU and some governments to take a more active role in this space.

Roughly defined, cybercrime is crime that occurs on, or is facilitated by, computers.  Cybersecurity comprises the actions taken to protect against cybercrime, including protecting devices so that they don’t get broken into, and remediating incidents when they do occur.

Cybercrime itself is a complex issue.  It relates to many things, including fraud, data theft, privacy violations, and just about any criminal endeavor that happened before the term “cyber” ever came to be.  There’s a great paper by a laundry list of Who’s Who in the economics of cybersecurity that proposes methods of estimating actual losses, breaking crime down into various categories.  Statistics in this space are remarkably fluid; that is, there are poor standards for data collection.

As it turns out, there is a treaty on cybercrime, conveniently called the Convention on Cybercrime, developed by the Council of Europe.  Nearly all of Europe, as well as the U.S. and a number of other countries, have ratified this treaty, and there are other signatories.  Research from the University of Singapore has already shown that either acceding to the treaty or even just becoming congruent with it will reduce a country’s cybercrime rate.  While the causal relationships are not clearly explained in that paper, one part is obvious: the first part of the treaty amounts to a best-practices document for governments on how they should develop legislation.

The treaty itself is fairly involved and took many years to get as many signatures as it did.  It has to deal with diverse societies who have differing constitutional views on freedom of speech and expression, as well as on due process.

The Secretary-General of the ITU and his staff, as well as a few governments, have been under the impression that the ITU could do a better job than what was done by the Council of Europe.  There is little chance of this happening; in all likelihood they would make matters worse, if for no other reason (and there are other reasons) than that anyone who has already signed the Convention would have to reconcile differences between it and whatever the ITU created.

There are other reasons the ITU cannot do better, not least of which is that it lacks the technical expertise to actively engage in cybersecurity.  Part of the problem is that most Internet standards are not ITU standards, but come from elsewhere.  While the ITU has any number of standards covering fiber-optics management and good codec support, the computer you’re reading this blog on uses mostly the work of others.  Another reason is that the state of the art in both cybercrime and cybersecurity is moving rapidly, beyond the ITU’s capability to adapt.  Here’s just one example: contrary to what people had thought, the battleground for cybercrime has not really moved to mobile devices.  As we’ve previously discussed, this has a lot to do with the update mechanisms and business models in play, the most notable being that applications on the iPhone in particular are both reviewed by Apple and signed.  The only iPhones you hear about being vulnerable are the ones that have been jailbroken by their owners, and that doesn’t account for a whole lot.

One WCIT proposal that refers to spam as a threat demonstrates how far off some governments are on the subject.  Spam itself has never really been much of a threat, but more of an annoyance: 80–90% of it is never delivered to the end user, and most Evil Doers have moved on to more sophisticated approaches, such as spear phishing.  Worse, back when spam really was a problem, the ITU-T’s Study Group 17 took years simply to come up with a definition of it.

This is not to say that the ITU shouldn’t have a role to play in cybersecurity.  The ITU has extraordinary access to the governments of developing countries, and can work with them to improve their cybersecurity posture through training and outreach.  In fact it does some of this in its Development, or ITU-D, Sector.  One thing the D Sector has done recently is put the governments of developing countries in touch with FIRST, the organization that coordinates discussion among Computer Incident Response Teams, or CIRTs.  But the ITU should give up any idea that it can play more of a role than outreach and capacity building, all of which should be done in consultation with actual experts.

Hello Insecurity, Goodbye Privacy. Thank you, President Obama

Some people say that Internet security is an oxymoron, because we hear so much about the different ways in which hackers and criminals break into our data, steal our identities, and even use information to commit “real world” crimes like burglary when it becomes clear that someone has gone on vacation.  Well, now the Obama Administration, along with the FBI and NSA, is proposing to make things worse, according to an article in today’s New York Times.

According to the Times, the government is going to propose requiring that developers give up one of the key principles of securing information: the use of end-to-end encryption.  The argument is that law enforcement no longer has the visibility into information it once had, say, in the Nixon era, when the NSA acted as a vacuum cleaner and had access to anything.

As our friend Professor Steve Bellovin points out, weakening the security of the Internet for law enforcement also weakens it to the benefit of criminals.  Not a month ago, for instance, David Barksdale was fired from Google for violating the privacy of teenagers.  He could do that because communications between them were not encrypted end-to-end.  (Yes, Google did the right thing by firing the slime.)
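
To see why end-to-end encryption makes that kind of abuse harder, here is a minimal sketch using the PyNaCl library.  This is purely illustrative, and it is not how Google or any particular product implements messaging; real systems add key verification, forward secrecy, and much more.  The point is simply that the message is encrypted on the sender’s device and decrypted only on the recipient’s device, so a server, or a rogue employee, in the middle sees only ciphertext.

    # Minimal end-to-end encryption sketch with PyNaCl:  pip install pynacl
    from nacl.public import PrivateKey, Box

    # Each party generates a keypair on their own device and publishes
    # only the public half.
    alice_private = PrivateKey.generate()
    bob_private = PrivateKey.generate()

    # Alice encrypts for Bob using her private key and Bob's public key.
    sender_box = Box(alice_private, bob_private.public_key)
    ciphertext = sender_box.encrypt(b"meet at noon")

    # A server in the middle can store or forward 'ciphertext', but without
    # Bob's private key it sees only random-looking bytes.

    # Bob decrypts on his own device.
    receiver_box = Box(bob_private, alice_private.public_key)
    print(receiver_box.decrypt(ciphertext))  # b'meet at noon'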

This isn’t the first time since the invention of public key cryptography that the government has wanted the keys to all the castles.  Some of us remember the Clipper chip and the key escrow system that the Clinton Administration wanted to mandate in the name of law enforcement.  A wise friend of mine said, and it applies equally now: “No matter how many people stand between me and the escrow, there exists a value of money for me to buy them off.”  The same would be true here, only worse, because in this case the government seems not to be proposing a uniform technical mechanism.

What’s worse: this mandate will impact only law-abiding citizens and not criminals, since criminals will encrypt their data anyway on top of whatever service they use.

What you can do: call your member of Congress now and find out where he or she stands.  If they’re in favor of such an intrusive policy, vote them out.

Wrap-up of this year’s WEIS

This year’s Workshop on the Economics of Information Security (WEIS2010) enlightened us about identity, privacy, and the insecurity of the financial payment system, to name just a few of the presentations.

Every year I attend a conference called the Workshop on the Economics of Information Security (WEIS), and every year I learn quite a bit from the experience.  This year was no exception.  The conference represents an interdisciplinary approach to cybersecurity that includes economists, government researchers, industry, and of course computer scientists.  Run by friend and luminary Bruce Schneier and Professor Ross Anderson of Cambridge University, and chaired this year by Drs. Tyler Moore and Allan Friedman, the conference offers an eclectic mix of work on topics such as cyber-insurance (usually including papers from field leader Professor Rainer Böhme, soon of the University of Münster), privacy protection, user behavior, and the underground economy.  This year’s conference had a number of interesting pieces of work.  Here are a few samples:

  • Guns, Privacy, and Crime, by Alessandro Acquisti (CMU) and Catherine Tucker (MIT), provides insight into how the addresses of gun permit applicants posted on a Tennessee website do not really impact their security one way or the other, contrary to arguments made by politicians.
  • Is the Internet for Porn? An Insight Into the Online Adult Industry – Gilbert Wondracek, Thorsten Holz, Christian Platzer, Engin Kirda and Christopher Kruegel provides a detailed explanation of the technology used to support the online porn industry, which the authors claim generates over $3,000 a second in revenue.
  • The password thicket: technical and market failures in human authentication on the web – Joseph Bonneau and Sören Preibusch (Cambridge) talks about just how poorly many websites manage all of those passwords we reuse.
  • A panel on the credit card payment system, together with a presentation demonstrating that even chip-and-PIN credit cards are not secure.  One of the key messages from the presentation was that open standards are critically important to security.
  • On the Security Economics of Electricity Metering – Ross Anderson and Shailendra Fuloria (Cambridge) discussed the various actors in the Smart Grid, their motivations, and some recommendations on the regulatory front.

The papers are mostly available at the web site, as are the presentations.  This stuff is important.  It informs industry as to which behaviors are both rewarding and provide for the social good, and it shows where we see gaps or the need for improvement in our public policies, especially where technology is well ahead of policy makers’ thinking.