Snowden disclosures reveal NSA abuse

I had no knowledge of the NSA’s programs, but I’m not surprised by most of it.  James Bamford laid out in The Puzzle Palace in 1982 what the NSA was capable of, and it has always been clear to me that the agency would establish whatever intelligence capabilities it could in order to carry out its mission.  There are several areas that raise substantial concerns:

1.  The NSA’s own documents indicate that it intended to interfere with and degrade crypto standards.  That alone has done substantial harm to the agency’s reputation, harm that will take decades to recover from.  But the NSA hasn’t just sullied its own reputation; it has sullied that of the National Institute of Standards and Technology (NIST), which is a true braintrust.  Furthermore, it has caused anyone technologically knowledgeable who has recently held or currently holds a government post to be discounted in the public discourse.  I will come back to this issue below.

2.  It is clear that the FISA mechanism simply broke down, and that its oversight entirely failed.  Neither Congress nor the Supreme Court took its role seriously.  They all gave so much deference to the executive because of that bugaboo word “terrorism” that they failed to safeguard our way of life.  That to me is unforgivable, and I blame both parties for it.  In fact, I wrote about this risk on September 12, 2001:

I am equally concerned about Congress or the President taking liberties with our liberties beyond what is called for. Already, millions of people are stranded away from their loved ones, and commerce has come to a halt. Let’s not do what the terrorists could not, by shrinking in fear in the face of aggression, nor should we surrender our freedom.

Sadly, here we are.

3. There are reports of law enforcement taking intelligence information and scrubbing its origin.  Where I come from we call that tampering with evidence, in an egregious attempt to get around those pesky 4th and 5th Amendments.

4. The NSA’s activities have done great harm to the U.S. services industry, because other nations and their citizens have no notion of when their information will be shared.  This is especially true for companies such as Google and Microsoft who, it is reported, were ordered to reveal information.  The great Tip O’Neill said that all politics is local.  That may be true, but in a global marketplace, all sales are local.

It would be wrong to simply lay the blame on the NSA.  They were following their mission; it was their oversight that failed.  Congress itself needs oversight.  That is our responsibility.

Smart Watches and wristbands: who is watching the watches?

Over the last few weeks a number of stories have appeared about new “wearable” technology that has the means to track you and your children.  NBC News has a comparison of several “Smart Watches” that are either on the market or soon will be.  Think Dick Tracy.  Some have phones built in, while others can send and receive email.  These devices won’t replace smartphones or other PDAs in general because their screens are so small.  They’re unlikely to have much of an Internet browser for that reason, and they may only support a few simultaneous applications on board.

Still, smart watches may find their own niche.  For instance, a smart watch can carry an RFID that could be used to control access to garage doors, or perhaps even your front door.  A smart watch might be ideal for certain types of medical monitoring because of its size.  In all likelihood these devices will have limited storage and will take advantage of various cloud services.  It’s this point that concerns me.

Any time data about you is stored somewhere, you have to know what others are using it for, and what damage can be done if that data falls into the wrong hands.  And so, now let’s consider some of the examples we discussed above in that light:

  1. Voice communications: as one large vendor recently discovered, anything that can be used as a phone can be used as a bug, to listen in on conversations.  Having access to a large aggregation of smart watches through the cloud would provide an entire market for attackers, especially if the information is linked to specific individuals.
  2. Medical monitoring: similarly, if you are using a smart watch or any other device for medical monitoring, consider who else might want to act on that information.  Insurance companies and employers immediately leap to mind, but then perhaps so do pharmaceutical companies who might want to market their wares directly to you.
  3. RFID and location-based services: there have already been instances of people being tracked electronically and murdered.  Children wearing this or a similar device could be kidnapped if the cloud-based services associated with it are broken into.

This is what concerns me about Disney’s MagicBand.  Disney makes a strong case that having such a band can actually improve service.  But should their information systems be broken into by a hacker, how much might a deranged estranged parent pay that criminal to find out where the child is?

It is the linkage of various attributes that must be considered.  Add location to a name, and all of a sudden a hacked cloud-based service can do someone real damage.  We give away a lot of this information already with many smartphone applications and other devices we carry.  Before we give away more, perhaps we should stop and think about our privacy in broader terms and about what is necessary to protect it.  In Europe, the Data Protection Directive covers a lot of this ground, but America and other countries are far behind that level of protection.  Further, every new service on a smart device is going to want to monetize every last bit of data it can get.

Securing domain names: what’s it take?

[Image: an old padlock (courtesy Joshua Sherurcij)]

When you see a URL like http://www.ofcourseimright.com, your computer needs to convert the domain name “www.ofcourseimright.com” to an IP address like 62.12.173.114.  As with everything else on the Internet, there are more and less secure ways of doing this.  Even the least secure way is actually pretty hard to attack; while false information is returned by the DNS all the time, it’s usually benign.  Still, there are some reasons to move to a more secure domain name system (an illustration with dig follows the list):

  • Attackers are getting more sophisticated, and they may attack resolvers (the services that change names to numbers).  Service providers, hotels, and certain WiFi networks are subject to these sorts of attacks, and they are generally unprepared for them.
  • There are a number of applications that could make use of the domain name system in new ways if it were more secure.
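
As an aside, you can watch both the name-to-address conversion and, where DNSSEC is deployed, the validation happen with a tool like dig.  In the abbreviated, illustrative output below, the “ad” (authenticated data) flag is the resolver telling you it cryptographically validated the answer:

    $ dig +dnssec www.ofcourseimright.com A

    ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
    ;; ANSWER SECTION:
    www.ofcourseimright.com. 3600 IN A 62.12.173.114
    www.ofcourseimright.com. 3600 IN RRSIG A 8 3 3600 ( ...signature... )

Without that flag you are simply trusting whichever resolver answered you, which is precisely the exposure described above.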

Still, it’s good that the current system hasn’t been seriously attacked, because the approach the Internet Engineering Task Force (IETF) recommends, DNSSEC, is a major pain in the patoot for mere mortals to use.  There is some good news: some very smart people have begun to document how to manage All of This®.  What’s more, some DNS registrars, who manage your domain names for you, will secure your domain name for a price.  However, doing so truly hands the registrar the keys to the castle.  And so what follows is my adventure in securing a domain name.

[Diagram: DNSSEC resource record check (Wikimedia Commons)]

DNSSEC is a fairly complex beast, and this article is not going to explain it all.  The moving parts to consider are how the zone signs its information, how that information is authorized (in this case, by the parent zone), and how the resolver validates what it receives.  It is important to remember that for any such system there must be a chain of trust between the publisher and the consumer for the consumer to reasonably believe what the publisher is saying.  DNSSEC accomplishes this by placing a hash of a zone’s key-signing key in its parent zone.  That way you know that the parent (like .com) has reason to believe that information signed with a particular key belongs to the child.
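
To make that concrete, here is roughly what the pairing looks like; the key material, key tag, and digest below are purely illustrative:

    ; In the child zone: the key-signing key (flags value 257 marks the SEP)
    ofcourseimright.com.  IN  DNSKEY  257 3 8  AwEAAcn3...   ; key tag 54321

    ; In the parent (.com) zone: a SHA-256 digest of that DNSKEY
    ofcourseimright.com.  IN  DS  54321 8 2  49FD46E6C4B45C55D4AC...

A validating resolver works downward: it checks the DS in .com against the DNSKEY the child serves, and the DNSKEY against the signatures on the zone’s records.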

From the child zone perspective (e.g., ofcourseimright.com), there are roughly five steps to securing a domain with DNSSEC (a command-line sketch follows the list):

  1. Generate zone signing key pairs (ZSKs).  These keys will be used to sign and validate each record in the zone.
  2. Generate key signing key pairs (KSKs).  These keys are used to sign and validate the zone signing keys.  They are known in the literature as the Secure Entry Point (SEP) because there aren’t enough acronyms in your life.
  3. Sign the zone.
  4. Generate a hash of the DNSKEY records for the KSKs in the form of a DS record.
  5. Publish the DS in the parent zone.  This provides the means for anyone to confirm which keys belong to your zone.
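
With BIND’s tools, the first four steps look roughly like the following sketch; the algorithm, key sizes, and key tag are merely reasonable illustrative choices:

    # 1. Generate a zone signing key (ZSK)
    dnssec-keygen -a RSASHA256 -b 1024 -n ZONE ofcourseimright.com

    # 2. Generate a key signing key (KSK); -f KSK sets the SEP flag
    dnssec-keygen -a RSASHA256 -b 2048 -f KSK -n ZONE ofcourseimright.com

    # 3. Add the public keys to the zone file and sign the zone,
    #    producing db.ofcourseimright.com.signed
    cat Kofcourseimright.com.+008+*.key >> db.ofcourseimright.com
    dnssec-signzone -o ofcourseimright.com db.ofcourseimright.com

    # 4. Produce a DS record from the KSK's public key
    dnssec-dsfromkey Kofcourseimright.com.+008+54321.key

(dnssec-signzone will also leave a dsset-ofcourseimright.com. file behind containing the same DS information.)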

Steps one through four are generally pretty easy when performed a single time.  The oldest and most widely used name server package, BIND, provides the tools to do this, although the instructions are not what I would consider straightforward.

Step five, however, is quite the pain.  To start with, you must find a registrar who will take your DS record at all; very few do.  For “.com” I have found only two.  Furthermore, the means of accepting those records is far from standardized.  For instance, at least one registrar insists that DS records be stored in the child zone, and they are only published in the parent once you’ve used the registrar’s web interface to select from the records it finds there.  Another registrar requires that you type the DS record information into a web form.  That isn’t perfect either: for one thing, it’s error prone, particularly as it relates to the validity period of a signature.

This brings us to the real problem with DNSSEC: both ZSKs and KSKs have expiration dates.  This is based on the well-established security notion that, given enough computational power, any key can be broken in some period of time.  But it also means that one must not only repeat steps one through five periodically, but do so in a way that observes the underlying caching semantics of the domain name system.  And this is where mere mortals have run away.  I know.  I ran away some time ago.

A tool to manage keying (and rekeying)

But now I’m trying again, thanks to several key developments, the first of which is a new tool called OpenDNSSEC.  OpenDNSSEC takes a zone file as input, writes the signed zone as output, and rotates keys on a configured schedule.  The tool can also generate output that can be fed to other tools to update parent zones such as “.com”, and it can manage multiple domains; I manage about six of them myself.
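
As a sketch, assuming the ods-ksmutil administration tool from the OpenDNSSEC 1.x series, adding a zone looks something like this (the paths will vary with your installation):

    # Tell OpenDNSSEC where to read the unsigned zone and write the
    # signed one; signing and key rotation then follow the KASP policy
    ods-ksmutil zone add --zone ofcourseimright.com \
        --input /var/opendnssec/unsigned/ofcourseimright.com \
        --output /var/opendnssec/signed/ofcourseimright.com

    # Ask the signer to (re)sign the zone now
    ods-signer sign ofcourseimright.com

    # Export the DS record for the current KSK, to hand to the parent
    ods-ksmutil key export --zone ofcourseimright.com --ds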

The tool is not entirely “fire and forget”.  To start with, it has a substantial number of dependencies, none of which I would call showstoppers, but they do take some effort from someone who knows something about installing UNIX software.  For another, as I mentioned, some registrars require that DS records appear in the child zone, and OpenDNSSEC doesn’t do this.  That’s a particular pain in the butt, because it means you must globally configure the system not to increment the serial number in a zone’s SOA record, then append the DS records to the zone, and then reconfigure OpenDNSSEC to increment the serial number again.  All of this is possible, but annoying.  Two good solutions would be to either modify OpenDNSSEC or change registrars.  The latter is only an option for certain top-level domains.
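
For the curious, the serial-number knob lives in OpenDNSSEC’s kasp.xml.  The fragment below is from memory and should be checked against the sample policy that ships with the tool; setting Serial to “keep” passes the unsigned zone’s serial through unchanged, where a normal configuration would use something like “datecounter”:

    <!-- inside the <Policy> element of kasp.xml -->
    <Zone>
      <PropagationDelay>PT43200S</PropagationDelay>
      <SOA>
        <TTL>PT3600S</TTL>
        <Minimum>PT3600S</Minimum>
        <Serial>keep</Serial>  <!-- normally datecounter, counter, or unixtime -->
      </SOA>
    </Zone>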

Choosing a Registrar

To make OpenDNSSEC most useful, one needs to choose a registrar that allows you to import DS records and also has a programmatic interface, so that OpenDNSSEC can call out to it when doing KSK rotations.  In my investigations I found such an organization in GKG.NET.  These fine people provide a RESTful interface to manage DS records that includes adding, deleting, listing, and retrieving key information.  It’s really just what the doctor ordered.  There are other registrars that have various forms of programmatic interfaces, but not so much for the U.S. three-letter TLDs.

The glue

Now this just leaves the glue between OpenDNSSEC and GKG.NET.  What is needed: a library to parse JSON, another to manage HTTP requests, and a whole lot of error handling.  These requirements aren’t that significant, so one can pick one’s language.  Mine was Perl, and it took about 236 lines (that’s probably 300 in PHP, 400 in Java, and 1,800 in C).
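
The shape of that glue, as a heavily abbreviated Perl sketch: the endpoint URL and JSON field names are made up for illustration, since I won’t reproduce GKG’s actual interface here; substitute whatever your registrar documents.

    #!/usr/bin/env perl
    # Push a freshly generated DS record to a registrar's REST
    # interface.  The endpoint and payload fields are hypothetical.
    use strict;
    use warnings;
    use LWP::UserAgent;
    use JSON qw(encode_json decode_json);

    my $api  = 'https://registrar.example/api/domains';   # hypothetical
    my $zone = 'ofcourseimright.com';

    # DS fields as produced by dnssec-dsfromkey or ods-ksmutil;
    # the values here are illustrative
    my $ds = {
        keytag      => 54321,
        algorithm   => 8,       # RSASHA256
        digest_type => 2,       # SHA-256
        digest      => '49FD46E6C4B45C55D4AC...',
    };

    my $ua  = LWP::UserAgent->new( timeout => 30 );
    my $res = $ua->post(
        "$api/$zone/ds",
        'Content-Type' => 'application/json',
        Content        => encode_json($ds),
    );

    # The real program is mostly error handling; that is where the
    # couple hundred lines go.
    die 'DS upload failed: ' . $res->status_line unless $res->is_success;
    print decode_json( $res->decoded_content )->{status}, "\n";

Deleting a stale DS after a KSK rollover is the mirror image: an HTTP DELETE against the same resource, with the same attention to error handling.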

So what to do?

If you want to secure your domain name and you don’t mind your registrar holding onto your keys and managing your domain, then just let the registrar do it.  It is by far the easiest approach.  But tools like OpenDNSSEC and registrars like GKG are definitely improving the situation for those who want to hold the keys themselves.  One lingering concern I have about all of this is the number of moving parts.  Security isn’t simply about cryptographic assurance; it’s also about how many things can go wrong and how many points of attack there are.  What all of this shows is that while DNSSEC itself can in theory make names secure, in practice, even though the system has been around for a good few years, the dizzying amount of technical knowledge required to keep it functional is a substantial barrier.  And there will assuredly be bugs found in just about all the software I mentioned, including Perl, Ruby, SQLite, LDNS, libxml2, and of course the code I wrote.  This level of complexity deserves further consideration if we really want people to secure their name-to-address bindings.

Should the ITU Handle Cybersecurity or Cybercrime?

Cybercrime and cybersecurity are two very important topics that are largely being lost in the noise around the American elections, the Arab Spring, and the European banking crisis.  Nevertheless, there is an attempt by the ITU and some governments to take a more active role in this space.

Roughly defined, cybercrime is crime that occurs on or is facilitated by computers.  Cybersecurity comprises the actions taken to protect against cybercrime, including protecting devices so that they don’t get broken into, and remediating matters when they do.

Cybercrime itself is a complex issue.  It relates to many things, including fraud, data theft, privacy violations, and just about any criminal endeavor that existed before the term “cyber” ever came to be.  There’s a great paper by a laundry list of Who’s Who in the economics of cybersecurity that proposes methods of estimating actual losses, breaking crime down into various categories.  Statistics in this space are remarkably fluid; that is, there are poor standards for data collection.

As it turns out, there is already a treaty on cybercrime, conveniently called the Convention on Cybercrime, developed in the Council of Europe.  Nearly all of Europe, as well as the U.S. and a number of other countries, have ratified the treaty, and there are other signatories.  Research from the University of Singapore has shown that either acceding to the treaty or merely becoming congruent with it reduces a country’s cybercrime rate.  While the causal mechanisms are not clearly explained in that paper, one part is obvious: the first part of the treaty amounts to a best-practices document for governments on how they should develop legislation.

The treaty itself is fairly involved and took many years to gather as many signatures as it has.  It has to accommodate diverse societies that hold differing constitutional views on freedom of speech and expression, as well as on due process.

The Secretary-General of the ITU and his staff, as well as a few governments, have been under the impression that the ITU could do a better job than the Council of Europe did.  There is little chance of that happening; in all likelihood they would make matters worse, if for no other reason (and there are other reasons) than that anyone who has already signed the Convention would have to reconcile the differences between it and whatever the ITU created.

There are other reasons the ITU cannot do better, not least of which is that it lacks the technical expertise to actively engage in cybersecurity.  Part of the problem is that most Internet standards are not ITU standards but come from elsewhere; while the ITU has any number of standards involving fiber-optic management and good codec support, the computer you’re reading this blog on mostly uses the work of others.  Another reason is that the state of the art in both cybercrime and cybersecurity is moving rapidly, beyond the ITU’s capacity to adapt.  Here’s just one example: contrary to what people had expected, the battleground for cybercrime has not really moved to mobile devices.  As we’ve previously discussed, this has a lot to do with the update mechanisms and business models in play, the most notable being that applications on the iPhone in particular are both reviewed by Apple and signed.  The only iPhone you hear about being vulnerable is the one that has been cracked by its owner, and that doesn’t account for a whole lot.

One WCIT proposal that refers to spam as a threat demonstrates how far off some governments are on the subject.  Spam itself has never really been much of a threat, more of an annoyance: 80-90% of it is never delivered to the end user, and most Evil Doers have moved on to more sophisticated approaches, such as spear phishing.  Worse, the ITU-T’s Study Group 17 took years simply to come up with a definition of spam, back when it really was a problem.

This is not to say that the ITU has no role to play in cybersecurity.  The ITU has extraordinary access to the governments of developing countries and can work with them to improve their cybersecurity posture through training and outreach.  In fact it does some of this in its Development, or ITU-D, Sector.  One thing the D Sector has done recently is put developing governments in touch with FIRST, the organization that coordinates discussion among Computer Incident Response Teams, or CIRTs.  But the ITU should give up any idea that it can play more of a role than outreach and capacity building, all of which should be done in consultation with actual experts.

Are bad iPhone maps a security problem?

A while ago I talked about business models and how they impact security.  The key point then was that Apple had a direct path to the consumer, which drove iOS update rates very quickly in comparison to Android.  Implicit in all of that was that consumers would find a reason to upgrade to the latest software.

Now we see a new version of iOS, version 6, that has what can only be described as a miserable replacement for Google Maps, as well as a number of reported problems with WiFi connectivity.  All of a sudden, the tables are turned.  Are the 200 new features in iOS 6 worth risking one’s ability to use WiFi or to have accurate mapping information?  Note that the question makes no reference to security.  That’s because consumers don’t care about that.

So, here’s the thing to watch, and Google will be watching very closely: what is the adoption rate of iOS 6 as compared to that of its predecessor?  The converted have already moved over.  Now it’s time for the rest of us.  Will we or won’t we?  I have already decided to wait for a “.0.1” release of iOS 6, as my iPhone works fine as is, and none of the new features seem so interesting that I want to risk breaking WiFi or my maps.  Note again, I’m not even mentioning security.