U.S. Currency War with China?

This short piece on News Hour introduces us to the politics of currency manipulation. A government that keeps its currency artificially low is in essence dumping its goods and services on every other country, thereby taking jobs from those countries. The hard part is determining when prices are really artificially low. While that is in the end a political judgment, we have some hints as to when the price of a currency is lower than it should be. One of those is when one country's per-capita income is far higher than another's, and yet goods and services still flow, net, from the poorer country to the richer one. According to the International Monetary Fund, for 2010 the U.S. had the 7th-highest per-capita income at $46,860, while China came in a distant 94th at $7,544 per person. China's trade surplus for that same year was $190 billion. Were we to attribute all of that to the United States, it would add about $680 to U.S. per-capita income.

Perhaps, on the other hand, the U.S. currency is too high. After all, the U.S. trade deficit for 2010 was $498 billion. But then what do we do about it? To lower the value of the dollar, you simply print more of it. Of course, that risks inflation. And if you do print more, why shouldn't another country respond by printing more of its own currency?

It's a messy business, and given the amount of money to be made or lost in speculating on currency, the U.S. Senate should be very careful about the sort of laws it passes, particularly ones that in some way tie the Treasury Department's hands in dealing with currency crises. Thar be dragons here.


Is Google Green or Wasteful?

Today's New York Times has an interesting article about how Google uses enough electricity on its own to power 200,000 American homes. Google claims that it's using that energy so that consumers don't have to, and that in fact they do so more efficiently than consumers in aggregate would. There's some small merit to the argument they're making, but it isn't obvious at first glance.

Google's argument is that they're saving you a trip to the library when you do a search. That might be true sometimes, but chances are you weren't going anyway. For one, you might instead have picked up your local yellow pages or an atlas, or written a postcard. But yes, sometimes you might have gone to the library, with your car.

Oftentimes we consumers get tricked into thinking that all big numbers are meaningful. Let me give you an example from the networking industry. It is not unheard of for a high-end Internet router to draw a lot of power: a fully configured Cisco CRS-1 uses about 8 kW. These are big pieces of hardware that can each serve the needs of thousands of customers. Perhaps there are 2,000 of them and their ilk in America, and probably fewer. So at any given moment that's about 16 megawatts of power. Big number, right? Now let's say we found a way to cut their power consumption by 10%. Per box, that's 800 watts. That's a lot of power, right?

Now let's look at a consumer router. You know the ones: Linksys, D-Link, etc. They use about 8 watts of power each. Of course, there are about 89 million of those devices out there[1][2]. That means that saving a single watt of power in each of those devices saves 89 megawatts.
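To put numbers on the comparison, here is the back-of-the-envelope arithmetic as a small Python sketch. The device counts and wattages are this post's ballpark estimates, not measurements:

import sys

# Core routers: ~2,000 CRS-1-class boxes at ~8 kW each.
core_count = 2_000
core_watts = 8_000
core_savings = core_count * core_watts * 0.10    # a 10% cut per box

# Consumer routers: ~89 million at ~8 W each.
consumer_count = 89_000_000
consumer_savings = consumer_count * 1            # saving just 1 W per box

print(f"Core fleet, 10% cut: {core_savings / 1e6:.1f} MW")      # 1.6 MW
print(f"Home fleet, 1 W cut: {consumer_savings / 1e6:.1f} MW")  # 89.0 MW

The impressive-sounding savings on the big iron comes to well under two megawatts; the one-watt savings on the humble home router dwarfs it.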

Why do I mention all of this? Who cares about how much power Google consumes?! The real issue is the computer you're reading this post with. There are orders of magnitude more of those than there are computers that Google uses to return a search result or deliver your email.

But what do you get for that energy usage? Well, you don't have to have your bills sent to you on paper, you don't have to use ink to write a check, you don't need as many checks printed, you don't need to receive a paper copy of the TV guide, and you might not even use DVDs anymore if you're using Netflix. In fact, you probably didn't read that New York Times article on printed paper! You don't need to fax, because you can email, and you probably don't even know how good your handwriting is these days, because you've been typing.

This is not to say that the technology sector shouldn't do a better job with recycling and energy use. And it's good that we look at the total cost of what we consume. But let's also recognize the benefits.

Web (in)Security and What Can Be Done

We all like to think that web security is perfect, but we all know better. You know about spam, phishing, and all manner of malware. You probably run a virus scanner on your computer. But what you don't expect, and shouldn't have to expect, is that the core of our security system has a flaw. It does, and it has from the beginning. What's more, it's a known flaw.

How does your browser decide to trust a site, or to show that lovely lock icon and perhaps a green URL bar when your communication is both encrypted and verified to be with a specific endpoint? The simple answer is that your browser provider (Microsoft, Mozilla, Apple, or Google) has made a decision on your behalf that, at least as initially configured, your browser will trust a certain set of authorities, known as certificate authorities (CAs), who in turn validate others.

One such certificate authority got hacked.  Badly.  And because they were trusted by your browser, so might you have been.  Here’s how it works.

  • When you access a URL that begins with "https", the site sends a certificate signed by one of the trusted CAs, saying "yes, I agree that this is google.com" (for example). If someone gets in between you and Google, they won't have the private key associated with that certificate, and so they won't be able to prove that identity to your browser.
  • If someone breaks into a CA and obtains a certificate for "google.com" (again, for example), and then gets between you and the real Google, they will be able to masquerade as Google. It doesn't matter which CA issued it, as long as your browser trusts that CA; Google needn't have any relationship with it. (A sketch of the normal validation step follows below.)
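To make the first bullet concrete, here is a minimal Python sketch of what a browser-like client does: it connects over TLS and lets the platform's bundle of trusted CAs validate the certificate chain. The hostname is just the example from above; note that any CA in the trust store can vouch for any site, which is exactly the weakness at issue:

import socket
import ssl

hostname = "google.com"  # the example site from this post

# create_default_context() loads the platform's set of trusted CAs --
# the same kind of pre-installed trust decision a browser ships with.
context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        # If we get here, the chain validated against *some* trusted CA.
        # Which CA signed it doesn't matter to the client -- that is the
        # gap an attacker holding a rogue certificate exploits.
        cert = tls.getpeercert()
        print("Issuer: ", dict(pair[0] for pair in cert["issuer"]))
        print("Subject:", dict(pair[0] for pair in cert["subject"]))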

This is what happened with DigiNotar. Not only did they get hacked, but they didn't notice. They didn't have sufficient controls in place to even spot the attack. Those controls they should have had.

But now there's something else we can do. In the Internet Engineering Task Force (IETF), a few folks led by a gentleman by the name of Paul Hoffman have developed an approach whereby sites like Google can effectively register which certificates are valid for them in a separate authority that we already largely trust: the Domain Name System (DNS). You use DNS to convert site names like ofcourseimright.com into IP addresses like 10.1.1.1.

The group working on it is called "dane". Had the dane mechanism been in place in the browser, the attack on DigiNotar and Google would have failed, even if Google had been a customer of DigiNotar (which it wasn't).
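Concretely, dane publishes certificate information in DNS as TLSA records. Here is a minimal Python sketch of such a lookup, using the third-party dnspython package; the domain is a placeholder, and a real deployment would also require DNSSEC validation of the answer:

import dns.resolver  # third-party package: dnspython

# DANE publishes TLSA records at _<port>._<protocol>.<host>.
# A DANE-aware client compares the certificate it received during the
# TLS handshake against the usage/selector/matching data found here.
name = "_443._tcp.example.com"  # placeholder name, for illustration only

try:
    for rdata in dns.resolver.resolve(name, "TLSA"):
        print(rdata.usage, rdata.selector, rdata.mtype, rdata.cert.hex())
except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
    print("No TLSA record published for", name)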

When we speak of security, we always discuss defense in depth. That is, never rely on exactly one mechanism to protect you, because at some point it will surely break. In this case, the attacker needed to (a) compromise the CA and (b) get in between the service and the end user to succeed. Had dane been in place, then in addition to (a) and (b), the attacker would also have had to compromise Google's DNS for the attack to succeed. That's likely even harder than compromising a CA.

Dane has another potential benefit: in the long run, it may get browsers completely out of the business of telling you whom to trust, or at least sharply limit that trust.

This attack also demonstrates that as threats evolve, our responses to those threats evolve too. Here we understood the threat; we just didn't get the work done fast enough before a CA was compromised. I still call this a win, as I think we can expect to see dane deployed even sooner than we expected before the attack.

IPv4 address shortage: Who was the first to become concerned?

My own answer is "I don't know". I only know that there were a few of us thinking about the problem in 1988. Roy Smith raised the issue on the TCP-IP mailing list on November 25th of that year with this message:

Date:      25 Nov 88 14:56:57 GMT
From:      roy@phri.UUCP (Roy Smith)
To:        comp.protocols.tcp-ip
Subject:   Running out of Internet addresses?
	Has anybody made any serious estimates of how long it will be
before we run out of 32-bit IP addresses?  (Silly question; I'm sure a very
great amount of thought has been given to it by many people.)  With the
proliferation of such things as diskless workstations, each of which has
its own IP address (not to mention terminal multiplexors which eat up one
IP address per tty line!), it seems like it won't be too long before we
just plain run out of addresses.

	Yes, I know that 2^32 is a hell of a big number, but it seems like
we won't get anywhere near that number of assigned addresses before we
effectively run out because most nets are sparsely populated.  My little
bit of wire, for example, has 256 allocated addresses yet I'm only actually
using 30 or so.
-- 
Roy Smith, System Administrator
Public Health Research Institute
{allegra,philabs,cmcl2,rutgers}!phri!roy -or- phri!roy@uunet.uu.net
"The connector is the network"

Back then we used IP addresses far more sparsely than we do today. That message kicked off a lengthy discussion in which nobody seriously denied the potential for a problem. You can find the whole archive of the exchange here. Two concepts were touched upon. The first was whether we could use the so-called "Class E" space (240.0.0.0/4). I and others gave this serious thought at the time. However, the related issue that won the day was that fixed address lengths were an important property to be maintained. Vint Cerf raised that design consideration as a question. He also raised the possibility of using variable-length OSI addresses.
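For a sense of scale, here is the quick arithmetic behind the numbers in that discussion, as a small Python sketch:

# Simple math on the address sizes discussed above.
total_ipv4 = 2 ** 32     # every possible 32-bit address
class_e    = 2 ** 28     # 240.0.0.0/4: top 4 bits fixed, 28 bits free

print(f"Total IPv4 addresses: {total_ipv4:,}")              # 4,294,967,296
print(f"Class E space:        {class_e:,}")                 # 268,435,456
print(f"Class E share:        {class_e / total_ipv4:.2%}")  # 6.25%

# Roy Smith's sparse-allocation example: 30 hosts on a 256-address block.
print(f"His net's utilization: {30 / 256:.1%}")             # 11.7%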