Smart Watches and wristbands: who is watching the watches?

Over the last few weeks a number of stories have appeared about new “wearable” technology that has the means to track you and your children.  NBC News has a comparison of several “Smart Watches” that are either on the market or soon will be.  Think Dick Tracy.  Some have phones built in, while others can send and receive email.  These things won’t replace smartphones or other PDAs in general because their screens are so small.  For that reason they’re not likely to have much of a web browser, and they may support only a few applications at a time.

Still, smart watches may find their own niche.  For instance, a smart watch can carry an RFID tag that could be used to control access to garage doors, or perhaps even your front door.  A smart watch might be ideal for certain types of medical monitoring because of its size.  In all likelihood these devices will have limited storage, and will take advantage of various cloud services.  It’s this last point that concerns me.

Any time data about you is stored somewhere, you have to know what others are using it for, and what damage can be done if that data falls into the wrong hands.  So now let’s consider some of the examples above in that light:

  1. Voice communications: as one large vendor recently discovered, anything that can be used as a phone can be used as a bug to listen in on conversations.  Access to a large aggregation of smart watches through the cloud would give attackers an entire market of targets, especially if the information is linked to specific individuals.
  2. Medical monitoring: similarly, if you are using a smart watch or any other device for medical monitoring, consider who else might want to act on that information.  Insurance companies and employers immediately leap to mind, but then perhaps so do pharmaceutical companies who might want to market their wares directly to you.
  3. RFID and location-based services: there have already been instances of people being tracked electronically and murdered.  Children wearing this or a similar device could be kidnapped if the cloud-based services associated with it are broken into.

This is what concerns me about Disney’s MagicBand.  Disney makes a strong case that having such a band can actually improve service.  But should their information systems be broken into by a hacker, how much might a deranged estranged parent pay that criminal to find out where the child is?

It is the linkage of various attributes that must be considered.  Add location to a name and, all of a sudden, a hacked cloud-based service can do someone real damage.  We give away a lot of this information already with many smartphone applications and other devices we carry.  Before we give away more, perhaps we should stop and think about our privacy in broader terms and what is necessary to protect it.  In Europe, the Data Privacy Directive covers a lot of this ground.  But America and other countries are far behind that level of protection.  Further, every new service on a smart device is going to want to monetize every last bit of data it can get.

Mark Crispin: 1956 – 2012

Mark Crispin passed away on the 28th of December. While I didn’t know him well, Mark was a very important visionary in the area of Internet applications, and email and character sets in particular.

I first enjoyed his work as a user of the MM program on TOPS-20, upon which he based the design of IMAP. MM featured strong searching and marking capabilities, as well as all the customization a person could want. It was through MM that people individualized their messages with funny headers or a cute name. And it was all so easy to use. Mark was constantly reminding us about that, and how UNIX’s interface could always stand improvement. Mark was an unabashed TOPS-20 fan.

Before the world had fully converged on vt100 semantics, Mark worked to standardize SUPDUP and the SUPDUP option. He was also early to recognize the limitations of a single host table. Mark’s sense of humor brought us RFC-748, the Telnet randomly-lose option, which was the first April 1 RFC. He also wrote another such RFC for UTF-9 and UTF-10.

Most of us benefit from Mark’s work today through our use of IMAP, which followed Einstein’s advice by having a protocol that was as simple as possible to tackle the necessary problems, but no simpler. We know this because our first attempt was POP, which was too simple. Mark knew he had hit the balance right because he benefited from his experience with lots of running code and direct work with many end users.

I will miss his quirkiness, his cowboy boots, and his recommendations for the best Japanese food in whatever town the IETF was visiting, and I will miss the contributions he should have had more time to make.

Securing domain names: what’s it take?

(Image: an old padlock.  Courtesy: Joshua Sherurcij)

When you see a URL like http://www.ofcourseimright.com, your computer needs to convert the domain name “www.ofcourseimright.com” to an IP address like 62.12.173.114.  As with everything else on the Internet, there are more secure and less secure ways of doing this.  Even the least secure way is actually pretty hard to attack.  While false information is returned by the DNS all the time, usually it’s benign.  There are still some reasons to move to a more secure domain name system:

  • Attackers are getting more sophisticated, and they may attack resolvers (the services that change names to numbers).  Service providers, hotels, and certain WiFi networks are subject to these sorts of attacks, and they are generally unprepared for them.
  • There are a number of applications that could make use of the domain name system in new ways if it were more secure.

Still, it’s good that the current system hasn’t been seriously attacked, because the approach the Internet Engineering Task Force (IETF) recommends – DNSSEC – is a major pain in the patoot for mere mortals to use.  There is some good news: some very smart people have begun to document how to manage All of This®.  What’s more, some DNS registrars who manage your domain names for you will, for a price, secure your domain name.  However, doing so truly hands the registrar the keys to the castle.  And so what follows is my adventure in securing a domain name.

(Image: how a DNSSEC resource record is checked.  Courtesy: Wikimedia Commons)

DNSSEC is a fairly complex beast, and this article is not going to explain it all.  The moving parts to consider are how the zone signs its information, how that information is authorized (in this case, by the parent zone), and how the resolver validates what it receives.  It is important to remember that for any such system there must be a chain of trust between the publisher and the consumer for the consumer to reasonably believe what the publisher is saying.  DNSSEC accomplishes this by placing a hash of the child zone’s signing key in the parent zone.  That way you know that somehow the parent (like .com) has reason to believe that information signed with a particular key belongs to the child.
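
You can watch this chain of trust with an ordinary dig client.  The names below are real, but the output is abbreviated here and will change with every key rollover:

    # Ask for the DS record: the parent's statement of which child key to trust.
    dig +short DS ofcourseimright.com

    # Ask the child zone for its keys, and for a signed answer:
    dig +short DNSKEY ofcourseimright.com
    dig +dnssec A www.ofcourseimright.com

A validating resolver checks that one of the child’s DNSKEYs hashes to the DS held by the parent, uses that key to verify the DNSKEY set, and then follows the signatures (RRSIG records) down to the answer itself.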

From the child zone’s perspective (e.g., ofcourseimright.com), there are five steps to securing a domain with DNSSEC:

  1. Generate zone signing key pairs (ZSKs).  These keys will be used to sign and validate each record in the zone.
  2. Generate key signing key pairs (KSKs).  These keys are used to sign and validate the zone signing keys.  They are known in the literature as the Secure Entry Point (SEP) because there aren’t enough acronyms in your life.
  3. Sign the zone.
  4. Generate a hash of the DNSKEY records for the KSKs in the form of a DS record.
  5. Publish the DS in the parent zone.  This provides the means for anyone to confirm which keys belong to your zone.

Steps one through four are generally pretty easy when viewed in a single instance.  The oldest and most widely used name server package, BIND, provides the tools to do this, although the instructions are not what I would consider straightforward.
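
To give a flavor of it, here is roughly what those four steps look like with BIND’s tools.  This is a sketch, not a recipe: the algorithm choices, key sizes, and key tags below are illustrative, and the exact flags vary by BIND version:

    # 1. Generate a zone signing key (ZSK):
    dnssec-keygen -a RSASHA256 -b 1024 -n ZONE ofcourseimright.com

    # 2. Generate a key signing key (KSK):
    dnssec-keygen -a RSASHA256 -b 2048 -n ZONE -f KSK ofcourseimright.com

    # 3. Sign the zone, naming the KSK explicitly (key tags are made up here):
    dnssec-signzone -o ofcourseimright.com -k Kofcourseimright.com.+008+11111 \
        db.ofcourseimright.com Kofcourseimright.com.+008+22222

    # 4. Produce a DS record from the KSK, ready for the parent zone:
    dnssec-dsfromkey Kofcourseimright.com.+008+11111.key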

Step five, however, is quite the pain.  To start with, you must find a registrar who will take your DS record at all.  Very few do; for “.com” I have found only two.  Furthermore, the means of accepting those records is far from standardized.  For instance, at least one registrar insists that DS records be stored in the child zone: they are listed in the parent zone only after you have used the registrar’s web interface to select among the records it finds there.  Another registrar requires that you type the DS record information into a web form.  That isn’t perfect either.  For one thing, it’s error prone, particularly as it relates to the validity duration of a signature.

This brings us to the real problem with DNSSEC: both ZSKs and KSKs have expiration dates.  This is based on the well-established security notion that, given enough computational power, any key can be broken in some period of time.  But it also means that one must not only repeat steps one through five periodically, but do so in a way that observes the underlying caching semantics of the domain name system.  And this is where mere mortals have run away.  I know.  I ran away some time ago.

A tool to manage keying (and rekeying)

But now I’m trying again, thanks to several key developments, the first of which is a new tool called OpenDNSSEC.  OpenDNSSEC takes a zone file as input, writes the signed zone as output, and will rotate keys on a configured schedule.  The tool can also generate output that can be fed to other tools to update parent zones such as “.com”, and it can manage multiple domains; I manage about six of them myself.
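
Key lifetimes and algorithms live in OpenDNSSEC’s kasp.xml policy file, and the tool rolls keys as those lifetimes expire.  The fragment below is abbreviated and based on my recollection of the 1.x schema, so treat it as illustrative and consult the documentation for your version:

    <KASP>
      <Policy name="default">
        <Keys>
          <KSK>
            <Algorithm length="2048">8</Algorithm> <!-- 8 = RSASHA256 -->
            <Lifetime>P1Y</Lifetime>               <!-- roll the KSK yearly -->
            <Repository>SoftHSM</Repository>
          </KSK>
          <ZSK>
            <Algorithm length="1024">8</Algorithm>
            <Lifetime>P30D</Lifetime>              <!-- roll the ZSK monthly -->
            <Repository>SoftHSM</Repository>
          </ZSK>
        </Keys>
      </Policy>
    </KASP>

The lifetimes are ISO 8601 durations.  The point is that key rollover becomes configuration rather than a calendar reminder, with OpenDNSSEC timing the steps to respect DNS caching.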

The tool is not entirely “fire and forget”.  To start with, it has a substantial number of dependencies, none of which I would call showstoppers, but they do take some effort from someone who knows something about installing UNIX software.  For another, as I mentioned, some registrars require that DS records appear in the child zone, and OpenDNSSEC doesn’t do this.  That’s a particular pain in the butt because it means you must globally configure the system not to increment the serial number in the SOA record for a zone, then append the DS records to the zone, and then reconfigure OpenDNSSEC to increment the serial number again.  All of this is possible, but annoying.  Two good solutions would be to either modify OpenDNSSEC or change registrars.  The latter is only an option for certain top-level domains.
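
To give a flavor of the day-to-day workflow, OpenDNSSEC 1.x ships a utility called ods-ksmutil for tending zones and keys.  The commands below are from that version (the key tag is made up), and the tool names changed in later releases:

    # Tell OpenDNSSEC about a zone; it reads the unsigned file, writes the signed one:
    ods-ksmutil zone add --zone ofcourseimright.com

    # Export the DS record(s) for the current KSK, ready to hand to the registrar:
    ods-ksmutil key export --zone ofcourseimright.com --ds

    # Once the parent zone actually publishes the DS, let the KSK rollover proceed:
    ods-ksmutil key ds-seen --zone ofcourseimright.com --keytag 11111

It is that hand-off to the parent zone that OpenDNSSEC cannot perform by itself, which brings us to registrars.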

Choosing a Registrar

To make OpenDNSSEC most useful, one needs to choose a registrar that allows you to import DS records and also has a programmatic interface, so that OpenDNSSEC can call out to it when doing KSK rotations.  In my investigations, I found such an organization in GKG.NET.  These fine people provide a RESTful interface for managing DS records that includes adding, deleting, listing, and retrieving key information.  It’s really just what the doctor ordered.  There are other registrars with various forms of programmatic interfaces, but not so much for the US three-letter TLDs.

The glue

Now this just leaves the glue between OpenDNSSEC and GKG.NET.  What is needed: a library to parse JSON, another to manage HTTP requests, and a whole lot of error handling.  These requirements aren’t that significant, and so one can pick one’s language.  Mine was Perl, and it’s taken about 236 lines (that’s probably 300 in PHP, 400 in Java, and 1,800 in C).
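
For a flavor of what that glue looks like, here is a minimal Perl sketch using only core modules.  The endpoint URL and the JSON field names are placeholders of my own invention, not GKG.NET’s actual interface, and real code needs authentication and far more error handling than shown:

    #!/usr/bin/env perl
    use strict;
    use warnings;
    use HTTP::Tiny;   # core module: a minimal HTTP client
    use JSON::PP;     # core module: JSON encoding/decoding

    # Placeholder endpoint of my own invention; the registrar's real one differs.
    my $base = 'https://api.example-registrar.test/domains';
    my $http = HTTP::Tiny->new(timeout => 30);
    my $json = JSON::PP->new->utf8;

    # List the DS records the registrar has published in the parent zone.
    sub list_ds {
        my ($domain) = @_;
        my $res = $http->get("$base/$domain/ds");
        die "GET failed: $res->{status} $res->{reason}\n" unless $res->{success};
        return $json->decode($res->{content});
    }

    # Publish a new DS record, e.g., when OpenDNSSEC begins a KSK rollover.
    sub add_ds {
        my ($domain, $ds) = @_;   # $ds: { keytag, algorithm, digest_type, digest }
        my $res = $http->post("$base/$domain/ds", {
            headers => { 'Content-Type' => 'application/json' },
            content => $json->encode($ds),
        });
        die "POST failed: $res->{status} $res->{reason}\n" unless $res->{success};
        return 1;
    }

    # Example: hand over the DS that dnssec-dsfromkey printed (values made up).
    add_ds('ofcourseimright.com',
        { keytag => 11111, algorithm => 8, digest_type => 2, digest => '3f8a...' });

The real work in the full 236 lines is exactly what this sketch waves away: authentication, retries, and checking that the parent zone actually serves what you submitted.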

So what to do?

If you want to secure your domain name and you don’t mind your registrar holding onto your keys and managing your domain, then just let them do it.  It is by far the easiest approach.  But tools like OpenDNSSEC and registrars like GKG are definitely improving the situation for those who want to hold the keys themselves.  One lingering concern I have about all of this is all the moving parts.  Security isn’t simply about cryptographic assurance.  It’s also about how many things can go wrong, and how many points of attack there are.  What all of this proves is that while DNSSEC itself can in theory make names secure, in practice, even though the system has been around for a good few years, the dizzying amount of technical knowledge required to keep it functional remains a substantial barrier.  And there will assuredly be bugs found in just about all the software I mentioned, including Perl, Ruby, SQLite, LDNS, libxml2, and of course the code I wrote.  This level of complexity is something that should be further considered if we really want people to secure their name-to-address bindings.

WCIT, the Internet, the ITU-T, and what comes next

Courtesy of Mike Blanche of Google, the map on the left shows in black the countries that signed the treaty developed at WCIT, in red the countries that indicated they would not sign it, and in other colors the countries that are still thinking about it.  A country can always change its mind.

Over the next few weeks that map will change, and the dust will settle.  The fact that the developed world did not sign the treaty means that the Internet will continue to function relatively unmolested, at least for a time, and certainly between developed countries.  As the places that already heavily regulate telecommunications are the ones signing the treaty, its impact will be subtle.  We will continue to see international regulatory challenges to the Internet, perhaps as early as 2014 at the ITU’s next Plenipotentiary conference.  Certainly there will be heated debate at the next World Telecommunication Policy Forum.

This map also highlights that the ITU is the big loser in this debacle.  Secretary-General Hamadoun Touré claimed that the ITU works by consensus.  It’s just not so when matters are contentious, and quite frankly, he lacked the power and influence to bring all the different viewpoints together.  This loss of consensus has split the Union, and that has substantial ramifications.  There is no shared vision or goal, and this will need to be addressed at the next Plenipotentiary conference.

With different sectors and diverse participants, it is hard to lump the Union into a single group.  Nations come together to manage radio spectrum in the ITU-R.  That’s important because spectrum crosses borders and needs to be managed.  In the ITU-D, developing and developed countries come together for a dialog on key issues such as cybersecurity and interoperability.  The work of the -D sector needs to be increased.  Most notably, its Programmes need even more capability, and its study groups should articulate more clearly the challenges and opportunities developing countries face.

The -T standardization sector is considerably more complex.  It’s important not to lose sight of the good work that goes on there. For example, many of the audio and video codecs we use are standardized in ITU-T study group 16.  Fiber transmission standards in study group 15 are the basis for long haul transmission.  Study group 12 has some of the foremost experts in the world on quality of service management.  However, the last six years have demonstrated a fundamental problem:

At the end of the day, when conflicts arise, and conflict is in the nature of standards work, one country one vote means the ITU-T caters to developing countries, which by their nature are not at the leading edge of technology.  The ITU-T likes to believe it holds a special place among standards organizations, and yet there have been entire study groups whose work has been ignored by the market and governments alike.  To cater to those who are behind the Rogers adoption curve is to chase away those who are truly in front.  This is why you don’t see active participation from Facebook, Twitter, or Google in ITU-T standards, and why even larger companies like Cisco, IBM, HP, and others prefer to do protocol work largely elsewhere.1

So what can be done?

In practice study groups in ITU-T serve four functions:

  • develop technical standards, known as recommendations;
  • provide fora for vertical standards coordination;
  • direct management of a certain set of resources, such as the telephone number space;
  • establish accounting rates and regulatory rules based on economics and policy discussions.

The first two functions are technical.  The other two are political.  The invasion of political processes into technical standards development is also a fundamental issue.  I offer the division above to demonstrate a possible way forward.  The role of the -D sector should be considered in all of this: hearing from developing countries about the problems they are facing continues to be important.

The ITU-T and its member states will have the opportunity to consider this problem over the next two years, prior to its plenipotentiary meeting.  There is a need for member states to first recognize the problem, and to address it in a forthright manner.

What Does the Internet Technical Community Need to Do?

For the most part, we’re at this point because the Internet Technical Community has done just what it needed to do.  After all, nobody would care about regulating a technology that is not widely deployed.  By and large, the community should keep doing what it is doing.  That does not mean there isn’t room for improvement.

Developing countries have real problems that need to be addressed. It takes resources and wealth to address cybersecurity, for example. To deny this is to feed into a political firestorm.  Therefore continued engagement and understanding are necessary.  Neither can be found at a political conference like WCIT.  WCIT has also shown that by the time people show up at such places, their opinions are formed.

Finally, we must recognize an uncomfortable truth about IPv4.  While Africa and Latin America still have free access to IPv4 address space, the rest of the world has exhausted its supply.  Whenever a scarce resource is given a price, there will be haves and have-nots.  When the have-nots are poor, and they often are, it can always be classed as an inequity.  In this case there truly is no need for such inequity, because IPv6 offers everyone ample address space.  Clearly governments are concerned about this.  The private sector had better regulate itself before it gets (further) regulated.

Another Uncomfortable Truth

Developing countries are also at risk in this process, and perhaps most of all.  They have been sold the idea that somehow “bridging the standardization gap” is a good thing.   It is one thing to participate and learn.  It is another to impede leading edge progress through bloc votes.  Leading edge work will continue, just elsewhere, as it has.

1 Cisco is my employer, but my views may not be their views (although that does happen sometimes).

Failure in Dubai: WCIT falls apart

After over a year’s worth of preparation on the part of nearly every country on earth, today the WCIT conference fell apart, with the U.S., Canada, UK, and other countries refusing to sign the new International Telecommunication Regulations (ITRs).  They all had good reason to not sign.

Never fear!  The Internet is still here and open for business.  Treaties have failed before and yet the world goes on.

This treaty:

  • put into play regulation of Internet Service Providers (ISPs), and would have required governments to impose international obligations on them;
  • attempted to add claims about human rights;
  • challenged the role of the U.N. Security Council, and whether U.N. sanctions could apply to telecommunications;
  • waded into cybersecurity and spam, without any real basis or understanding of what it would mean to do so;
  • and, worst of all, ran headlong into Internet governance, challenging the flexible approach that has grown the network from nothing to 2.5 billion people.

This was never going to be an easy conference.  It has been clear for many years that the developing world has very different views from the developed world, and the views of Russia, China, and Iran are quite different from those of the U.S., Canada, and Europe.  In the end, the gulf between these worlds was too great.

I extend my sincere thanks to those who spent many tireless hours in Dubai in defense of the Internet.  A partial list includes Markus Kummer, Sally Wentworth, Karen Mulberry, and Leslie Daigle of the Internet Society; Chip Sharp, KY Hong, Hosein Badran, and Robert Pepper of Cisco Systems; Adam Gosling of APNIC; Patrik Fältström of NetNod; Phil Rushton of BT; Mike Blanche, Sarah Falvey, and Aparna Sridhar of Google; Tom Walsh of Juniper; Anders Jonsson of the Swedish Administration; Dr. Richard Beaird, James Ennis, Vernita Harris, Ashley Heineman, Joanne Wilson, Franz Zichy, and many others from the American Administration; and Dr. Bruce Gracie, Avellaneda, and Martin Proulx from the Canadian administration.

These people spent many weeks away from their families, both in Dubai and in preparation.  This was not the result they were hoping for.

A special thanks to Vint Cerf, who travels the earth to keep the Internet bringing communications to all.