Interesting Geoff Huston Posting on CircleID

Geoff Huston has established himself as perhaps the foremost authority on IP address markets.  A senior researcher at APNIC, Geoff has tracked this issue for over a decade.  He has recently posted a new blog entry at CircleID, on which I’ve commented.  Here’s what I wrote there:

The fundamental basis for the article above is a lack of transparency within IP address markets.  This is something that Bill Lehr, Tom Vest, and I worried about in our contribution to TPRC in 2008.

Amongst other things, transparency or its lack has the following effects:

  • Assuming it is a goal, efficiency in markets demands transparency.  When markets lack transparency, neither the buyer nor the seller knows whether they have gotten a good deal, because there may have been a buyer who would have paid more, or a seller who would have sold for less, who was simply never identified.  Is $10 per address a good price?  There is at least a tidbit of information from some of the brokers indicating wide variance in the cost of IP address blocks.  Whether that information is accurate, who can say?  It is not required to be.
  • Network administrators and owners should be making informed decisions about how and when to move to IPv6.  Absent pricing information regarding v4, there is uncertainty that is difficult to price.  In this sense, hiding pricing information may actually encourage IPv6 deployment.  Keep in mind that large institutions require years if not decades to make this sort of transition.  Were I them, given the increased number of devices (if you can believe the numbers above, and I suggest that we take them with a grain of salt), I would start now to get out of this rigamarole.  Heck, even with transparency, that only tells you today’s price, and not tomorrow’s.  Certainly it is well worth researching methods to price this risk.
  • It is important to know if there is an actor who is attempting to corner the market.  Proper registration of purchases and sales provides an overview of whether dominant players are acquiring addresses beyond the needs of their customer base.  Such acquisitions would have the impact of increasing costs for new entrants.
  • Finally, the Internet Technical Community (whoever we are) needs to know whether new entrants are in fact unable to access the Internet because IPv4 addresses have become too expensive, if we want to see the safe and secure growth of the Internet everywhere.

The funny aspect of all of this is that governments may already be able to track some pricing information retrospectively through, of all things, compulsory capital asset sale reports, such as the U.S. Form 1040 Schedule D.  However, in general this information is confidential and not very fresh, and hence not sufficient to advance policy discussions.

WCIT, the Internet, the ITU-T, and what comes next

Courtesy of Mike Blanche of Google, the map on the left shows in black the countries that signed the treaty developed at WCIT, in red the countries that indicated they would not sign it, and in other colors the countries that are still deciding.  A country can always change its mind.

Over the next few weeks that map will change, and the dust will settle.  The fact that the developed world did not sign the treaty means that the Internet will continue to function relatively unmolested, at least for a time, and certainly between developed countries.   As the places that already heavily regulate telecommunications are the ones who are signing the treaty, its impact will be subtle.  We will continue to see international regulatory challenges to the Internet, perhaps as early as 2014 at the ITU’s next Plenipotentiary conference.  Certainly there will be heated debate at the next World Telecommunication Policy Forum.

This map also highlights that the ITU is the big loser in this debacle.  Secretary General Hamadoun Touré claimed that the ITU works by consensus.  It’s just not so when matters are contentious, and quite frankly he lacked the power and influence to bring all the different viewpoints together.  This loss of consensus has split the Union, and has substantial ramifications.  There is no shared vision or goal, and this will need to be addressed at the next Plenipotentiary conference.

With different sectors and diverse participants, it is hard to lump the Union into a single group. Nations come together to manage radio spectrum in the ITU-R.  That’s important because spectrum crosses borders, and needs to be managed.  In the ITU-D, both developing and developed countries come together to have a dialog on key issues such as cybersecurity and interoperability.  The work of the -D sector needs to be increased.  Most notably, their Programmes need even more capability, and the study groups should be articulating more clearly the challenges and opportunities developing countries face.

The -T standardization sector is considerably more complex.  It’s important not to lose sight of the good work that goes on there. For example, many of the audio and video codecs we use are standardized in ITU-T study group 16.  Fiber transmission standards in study group 15 are the basis for long haul transmission.  Study group 12 has some of the foremost experts in the world on quality of service management.  However, the last six years have demonstrated a fundamental problem:

At the end of the day, when conflicts arise, and conflict is in the nature of standards work, one country one vote means that the ITU-T caters to developing countries, who by their nature are not at the leading edge of technology.  The ITU-T likes to believe it holds a special place among standards organizations, and yet there have been entire study groups whose work has been ignored by the market and governments alike.  To cater to those who are behind the Rogers adoption curve is to chase away those who are truly in front.  This is why you don’t see active participation from Facebook, Twitter, or Google in ITU-T standards, and why even larger companies like Cisco, IBM, HP, and others prefer to do protocol work largely elsewhere.1

So what can be done?

In practice study groups in ITU-T serve four functions:

  • develop technical standards, known as recommendations;
  • provide fora for vertical standards coordination;
  • direct management of a certain set of resources, such as the telephone number space;
  • establish accounting rates and regulatory rules based on economics and policy discussions.

The first two functions are technical.  The others are political.  The invasion of political processes into technical standards development is itself a fundamental issue.  I offer the above division to demonstrate a possible way forward for consideration.  The role of the -D sector should be considered in all of this.  Hearing from developing countries about the problems they face continues to be important.

The ITU-T and its member states will have the opportunity to consider this problem over the next two years, prior to its plenipotentiary meeting.  There is a need for member states to first recognize the problem, and to address it in a forthright manner.

What Does the Internet Technical Community Need to Do?

For the most part, we’re at this point because the Internet Technical Community has done just what it needed to do.  After all, nobody would care about regulating a technology that is not widely deployed.  For the most part, the Internet Technical Community should keep doing what we’re doing.  That does not mean there isn’t room for improvement.

Developing countries have real problems that need to be addressed.  It takes resources and wealth to address cybersecurity, for example.  To deny this is to feed into a political firestorm.  Therefore continued engagement and understanding are necessary.  Neither can be found at a political conference like WCIT.  Indeed, WCIT has shown that by the time people show up at such places, their opinions are already formed.

Finally, we must recognize an uncomfortable truth about IPv4.  While Africa and Latin America still have free access to IPv4 address space, the rest of the world has exhausted its supply.  Whenever a scarce resource is given a price, there will be haves and have-nots.  When the have-nots are poor, and they often are, it can always be classed as an inequity.  In this case, there truly is no need for such inequity, because IPv6 offers everyone ample address space.  Clearly governments are concerned about this.  The private sector had better regulate itself before it gets (further) regulated.

Another Uncomfortable Truth

Developing countries are also at risk in this process, and perhaps most of all.  They have been sold the idea that somehow “bridging the standardization gap” is a good thing.   It is one thing to participate and learn.  It is another to impede leading edge progress through bloc votes.  Leading edge work will continue, just elsewhere, as it has.

1 Cisco is my employer, but my views may not be their views (although that does happen sometimes).

IPv4 address shortage: Who was the first to become concerned?

My own answer is “I don’t know”.  I only know that there were a few of us thinking about the problem in 1988.  Roy Smith raised the issue on the TCP-IP mailing list on November 25th of that year with this message:

Date:      25 Nov 88 14:56:57 GMT
From:      roy@phri.UUCP (Roy Smith)
To:        comp.protocols.tcp-ip
Subject:   Running out of Internet addresses?
	Has anybody made any serious estimates of how long it will be
before we run out of 32-bit IP addresses?  (Silly question; I'm sure a very
great amount of thought has been given to it by many people.)  With the
proliferation of such things as diskless workstations, each of which has
its own IP address (not to mention terminal multiplexors which eat up one
IP address per tty line!), it seems like it won't be too long before we
just plain run out of addresses.

	Yes, I know that 2^32 is a hell of a big number, but it seems like
we won't get anywhere near that number of assigned addresses before we
effectively run out because most nets are sparsely populated.  My little
bit of wire, for example, has 256 allocated addresses yet I'm only actually
using 30 or so.
-- 
Roy Smith, System Administrator
Public Health Research Institute
{allegra,philabs,cmcl2,rutgers}!phri!roy -or- phri!roy@uunet.uu.net
"The connector is the network"

Back then we used IP addresses in a considerably sparser way than we do today.  That message kicked off a lengthy discussion in which nobody was seriously in denial about the potential for a problem.  You can find the whole archive of the exchange here.  There were two concepts that were touched upon.  The first was whether or not we could use the so-called “Class E” space (240.0.0.0/4).  I and others gave this serious thought at the time.  However, the related issue which won the day was that fixed address lengths were an important property to maintain.  Vint Cerf raised that design consideration as a question.  He also raised the possibility of using variable-length OSI addresses.
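
As an aside, should you wish to poke at this yourself, here is a minimal sketch using Python’s standard ipaddress module (the module and the specific addresses I test are my choices, nothing from the 1988 thread) showing just how large the still-reserved Class E block is:

    import ipaddress

    # Class E: reserved "for future use" then, and still reserved today.
    class_e = ipaddress.ip_network("240.0.0.0/4")
    print(class_e.num_addresses)                          # 268435456 -- a sixteenth of IPv4
    print(ipaddress.ip_address("240.0.0.1") in class_e)   # True
    print(ipaddress.ip_address("192.0.2.1") in class_e)   # False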

Here comes World IPv6 Day!

As you may have read in the press some time ago, the world is running out of IP addresses.  Really, the world is running out of the current version of IP addresses.  An IP address is the means by which your computer and my computer can communicate with each other.  Addresses are similar to phone numbers in that if we each have a unique number, we can each call the other.

How is it we’ve run out?  Quite simply, the IP version 4 address size is fixed at 32 bits, which allows for at most a little over 4 billion computers to connect simultaneously.  Through the use of some sneaky tricks we are able to connect well more than 4 billion, under the assumption that not every device needs to be able to communicate with every other device, but that game is getting a bit overplayed.
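
For the curious, here is a toy sketch of one such sneaky trick, network address translation: many private hosts share one public address and are told apart by translated port numbers.  Real NATs are far more involved; the addresses and ports below are purely illustrative:

    PUBLIC_ADDRESS = "192.0.2.1"    # a documentation address standing in for a real one

    nat_table = {}                  # (private ip, private port) -> public port
    next_port = 40000

    def translate(private_ip, private_port):
        """Give each private (ip, port) pair its own public port."""
        global next_port
        key = (private_ip, private_port)
        if key not in nat_table:
            nat_table[key] = next_port
            next_port += 1
        return (PUBLIC_ADDRESS, nat_table[key])

    # Two hosts behind the NAT appear as one public address outside:
    print(translate("10.0.0.2", 5000))   # ('192.0.2.1', 40000)
    print(translate("10.0.0.3", 5000))   # ('192.0.2.1', 40001)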

And so, over fifteen years ago, the Internet Engineering Task Force (IETF) created IPv6, which has enough address space to stick an address on every speck of sand in the world.  More precisely, IPv6 can handle 2^128 or 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses.
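
If you want to check my arithmetic, Python’s big integers will happily do it for you:

    print(2**32)     # 4294967296 -- the IPv4 ceiling, "a little over 4 billion"
    print(2**128)    # 340282366920938463463374607431768211456 -- the IPv6 space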

NOW THAT’S A LOT OF PASTA!

Nobody wanted IPv6 way back then when we had plenty of IPv4 address space, but now that we’re out of IPv4 addresses, it’s moving day. That’s because we’ve become mobile, and computers have gotten smaller.  Not only can a cell phone access the Internet, but so can your printer,  a car, a boat, a camera, your television, washing machine, many game systems, and many other things.

Tomorrow is World IPv6 Day. Many service providers and web sites will be enabling the next generation Internet Protocol tomorrow to see what works and what breaks.  Will this inconvenience you even just a little?  Probably not.  Here’s why: your home gateway almost certainly doesn’t support IPv6, unless you’re a geek like me, in which case IPv6 day might inconvenience me.  But I had to go to quite some inconvenience already to get IPv6 into my home, so what’s just a little bit more?

Anyway, it’s all one big test to see how painful moving to IPv6 really is, and to see what breaks and what needs fixing.  As service providers and web sites work out the kinks, you’ll be hearing more about IPv6.  Eventually, much as you did when you moved to high-definition television, you’ll probably need a new router.  If all goes well, the only difference you’ll notice is that eventually services like Skype and iChat AV will improve.

By the way, this blog is IPv6-enabled!
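
If you’d like to check whether a favorite site of yours has joined in, here’s a quick, unofficial sketch using Python’s standard socket module.  It only checks DNS (AAAA records), not whether you can actually reach the site over IPv6, and the hostname is merely a stand-in for whatever site interests you:

    import socket

    def ipv6_addresses(hostname):
        """Return the IPv6 addresses a hostname resolves to, if any."""
        try:
            infos = socket.getaddrinfo(hostname, 80, socket.AF_INET6)
        except socket.gaierror:
            return []               # no AAAA records (or resolution failed)
        return sorted({info[4][0] for info in infos})

    # Prints a list of IPv6 addresses, or [] if the site has none.
    print(ipv6_addresses("example.com"))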

How to get a Time Capsule to actually work in IPv6 without wireless

I have an unusual home configuration, in that I have a routed network.  If you don’t know what this means, stop reading now, as you are wasting your time.  While the Apple Time Capsule advertises IPv6 capability, getting it working is rather difficult.  To start with, if you do not use the wireless capability of the device, the controls are really non-obvious.  For another, the Time Capsule appears to ignore the default route advertised in router advertisements.  Hence a manual configuration is required:

Time Capsule Configuration

Looking to the left, one must select “Router” as the IPv6 mode, and not “Host” as one might logically expect.  Then, because RAs are not being handled properly, one must manually enter the default route (the long way).

Finally, because you are supposed to be routing, you need to enter some address for the “LAN” side.  My prefix is 2001:8a8:1006::/48.  Note I’ve dedicated a bogus network ::8/64 to the effort.  All of this allows me to do what should have happened automatically; not your typical Apple Plug-N-Play style, is it?  For a company that claims to be IPv6 Ready, I’d say Apple still has a ways to go.  Sadly, they’re better than most.
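
For what it’s worth, here is a small sanity check of that addressing plan using Python’s standard ipaddress module; this is merely my own verification sketch, not anything the Time Capsule requires:

    import ipaddress

    prefix = ipaddress.ip_network("2001:8a8:1006::/48")   # my delegated prefix
    lan = ipaddress.ip_network("2001:8a8:1006:8::/64")    # the "::8/64" LAN above

    # Confirm the /64 really sits inside the /48 (subnet_of needs Python 3.7+).
    print(lan.subnet_of(prefix))    # True
    print(lan.num_addresses)        # 18446744073709551616 addresses on one LAN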