Guns and Gun Control: The Numbers Are Beginning To Add Up

Many people claim that they need to own guns to protect themselves, that they cannot leave it to the police to protect them.  The enormous assumption is that a gun actually offers some protection.  A number of scholarly works have tested that assertion.

  • A longitudinal study by Johns Hopkins and Berkeley researchers, published in 2015 in the American Journal of Public Health, shows that Connecticut's permit-to-purchase law reduced firearm homicide by 40%.
  • A separate Johns Hopkins study showed that firearm suicide rates in Connecticut dropped 15.4% after that law was passed, while Missouri's firearm suicide rate increased by 16.1% after the state repealed its gun control legislation.  Connecticut's overall suicide rate was also lower than expected.
  • Missouri also saw a 25% increase in homicides after its background-check law was repealed.
  • An earlier CDC study, published in 2004 in the American Journal of Epidemiology, showed that simply having a gun in the home, regardless of how it is stored, increases the odds of death by firearm by a factor of 1.9.
  • A more recent meta-study by Harvard researchers in the Annals of Internal Medicine showed an increased risk of both suicide and homicide in homes where guns are present.  In particular, that study found that homicide victimization rates were slightly higher for those who had guns in their homes than for those who did not.
  • A 2011 CMU study did show that having a gun in the home seems to deter certain planned crimes such as burglary, but it has no effect on unplanned crimes.  Furthermore, it showed that merely having a gun in the home provides no deterrence by itself; the burglar somehow has to learn that the gun is there.

Summing up: studies thus far demonstrate that having a gun in the house increases the chances of someone in that house dying by firearm, increases the risk of suicide, and does not prevent a crime of passion, although it may deter a burglary.  More analysis is needed.  It is likely, for instance, that the type of gun matters, and open-carry laws in particular need far more study.  Still, if you think a gun offers you any sort of protection against others, consider the risks.

Image courtesy of aliengearholsters.com.

Here’s MUD in your eye! A way to protect Things on the Internet

How can the network protect so many types of things? We need manufacturers to step up and tell us.

Since 2011, Cisco Systems has been forecasting that there will be at least 50 billion devices connected to the Internet by the year 2020.  Those are a lot of Things, but that's not the number I'm worried about.  Consider this: Apple manages somewhere in the neighborhood of 1 billion active iOS devices on its own, and there are about 1.4 billion Android devices that are also managed, though less well.  Rather, it's the number of types of things that people should be concerned about.  To begin with, not everyone is going to do as good a job of managing their products out in the field as Apple and Google do.  Moreover, even Apple and Google end support for older versions of their products after some period of time.

I call this the Internet of Threats.  Each and every one of those devices, including the device you are reading this note on right now, probably has a vulnerability that some hacker will exploit.

A good number of the manufacturers of those things will never provide fixes to their customers, and even those that do can have little expectation that the device will ever be updated.  Let's put it this way: when was the last time you installed new software on your printer?  Probably never.

The convenient thing is that many Things probably only have a small set of uses.  A printer prints and maybe scans, a thermostat like a Nest controls the temperature in your house, and a baby monitor monitors babies.  This is the exact opposite of the general-purpose computing model of your laptop, and we can take advantage of that fact.

If a Thing has only a small number of uses, then it probably communicates on the network in only a small number of ways.  The people who know those ways best are most likely the manufacturers of the devices themselves.  If that is the case, then what we need is a way for manufacturers to tell firewalls and other systems what those ways are, and what ways are particularly unsafe for a device.  This isn't much different from the usage label that comes with medicine.

So what is needed to make all of this work?  Again, conveniently, most of the components are already in your network.  The first thing we need is a way for devices to tell the network where to get the manufacturer usage description file (or MUD file).  There's an excellent example of that in your browser right now, called a Uniform Resource Locator (URL), like https://www.ofcourseimright.com.  In our case, we need something a bit more structured, like https://www.example.com/.well-known/mud/v1/someproduct/version.  How you get that file, however, is exactly the same as how you got to this web page.
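
To make that retrieval step concrete, here's a minimal sketch in Python.  The URL is the illustrative one from above, not a registered location, and the function name is mine:

```python
import urllib.request

# Illustrative MUD URL from the text above; the path layout is an
# example, not a registered standard.
MUD_URL = "https://www.example.com/.well-known/mud/v1/someproduct/version"

def fetch_mud_file(url: str, timeout: float = 10.0) -> bytes:
    """Retrieve the manufacturer usage description file over HTTPS."""
    with urllib.request.urlopen(url, timeout=timeout) as response:
        return response.read()
```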

Next, we need a way for the Thing to give that URI to the network.  Once again, the technology is pretty much done.  Your device got an IP address today using the Dynamic Host Configuration Protocol (DHCP), which provides an introduction between the device and the network.  All we need to do is add one new parameter or option so that the client can simply pass along its MUD URI.  There are even more secure ways of doing that, using public key infrastructure (PKI) approaches such as IEEE's 802.1AR certificate format and the 802.1X protocol.  The nice thing about using a manufacturer certificate with 802.1AR is that it is then the manufacturer, and not the device itself, asserting what the device's communication patterns are.
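
As a sketch of how little is involved, here is the option encoding in Python.  DHCP options are simple type-length-value triples; the option code below is purely illustrative, since none had been assigned for this purpose at the time of writing:

```python
MUD_OPTION_CODE = 161  # hypothetical option code, for illustration only

def encode_mud_option(mud_uri: str, code: int = MUD_OPTION_CODE) -> bytes:
    """Encode a MUD URI as a DHCP type-length-value option."""
    value = mud_uri.encode("ascii")
    if len(value) > 255:
        raise ValueError("a DHCP option value must fit in a one-byte length")
    return bytes([code, len(value)]) + value

# e.g., appended to the options field of a DHCPDISCOVER or DHCPREQUEST:
option = encode_mud_option(
    "https://www.example.com/.well-known/mud/v1/someproduct/version")
```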

Now, thanks to DHCP or IEEE 802.1X, the network can go get the MUD file.  What does that look like?  At the moment, <it> <looks> <like> <a> <bunch> <of> <XML>.  {"it": ["may", "look", "more"], "like": "json"} in the future.  The good news here is that once again we're building on a bunch of work that is already complete.  The XML itself is structured using a data model called YANG.  So long as it conveys to the network what sort of protections a device needs, it could be anything, but YANG will do for now.
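
Whatever the serialization, the substance is a list of permitted communication patterns.  Here is a purely illustrative sketch of such a description, built as JSON from Python; the field names are invented for this post, not the actual data model:

```python
import json

# Invented field names, for illustration only -- not the real YANG model.
mud_description = {
    "manufacturer": "example.com",
    "product": "someproduct",
    "allowed-inbound": [
        {"protocol": "tcp", "port": 631, "purpose": "IPP printing"},
    ],
    "allowed-outbound": [
        {"protocol": "tcp", "port": 443, "host": "updates.example.com",
         "purpose": "firmware updates"},
    ],
}

print(json.dumps(mud_description, indent=2))
```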

Finally, the basic enforcement building block is the access control function in a router or access point.  That function says what each device may communicate with, and such functions have been around since the earliest days of the Internet.
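
Putting the pieces together, a toy enforcement check might look like the following.  A real router or access point would compile the MUD file into its native ACL format, but the default-deny logic is the essence; the policy fields are the invented ones from the sketch above:

```python
from typing import Optional

# A toy policy using the same invented fields as the sketch above.
policy = {
    "allowed-inbound": [{"protocol": "tcp", "port": 631}],
    "allowed-outbound": [{"protocol": "tcp", "port": 443,
                          "host": "updates.example.com"}],
}

def connection_allowed(policy: dict, direction: str, protocol: str,
                       port: int, host: Optional[str] = None) -> bool:
    """Return True only if a rule explicitly permits this connection."""
    for rule in policy.get(f"allowed-{direction}", []):
        if rule["protocol"] != protocol or rule["port"] != port:
            continue
        if "host" in rule and rule["host"] != host:
            continue
        return True
    return False  # default deny: traffic the manufacturer didn't describe

# Printing is allowed; the printer contacting an arbitrary site is not.
assert connection_allowed(policy, "inbound", "tcp", 631)
assert not connection_allowed(policy, "outbound", "tcp", 443, "cnn.com")
```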

And that's it.  So now if I have a printer from HP and they make a MUD file available, they might tell my network that the printer should only receive printer communications, and that it should only ever try to send certain types of unsolicited messages.  If anyone tries to contact the printer for another use, forget it.  If the printer tries to contact CNN, or more importantly random devices on my network, it has probably been hacked, and it will be blocked.  Google can do the same with a Nest.

We’re talking about this at the IETF and elsewhere.  What do you think?

The Internet of Everything: Everything will communicate with something!

Things will communicate with their manufacturers, and they need to do so to be secure.

A number of security researchers are getting upset by seeing home devices communicate with one another or with random sites in China.  Is this an attack?  Probably not.  But there may be exploitable vulnerabilities that should give consumers pause.

There are two common design patterns.  Today I’m just going to discuss what we call “Calling Home”.  When we use the term, we are not referring to your home, but to a centralized management site.  In the case of Thing manufacturers, the site is likely offered by the manufacturer.

So you just bought that new digital video recorder and it offers a great new feature: you can program it wherever you are.  There are many such devices on the market today, such as a SlingBox.  How do those communications happen?

[Figure: rendezvous call-home topology, with home devices behind a router connecting outward to a cloud rendezvous service]

In the figure above, all your home devices sit behind your home router.  They're generally allowed to connect to systems outside of your network, but systems outside are not able to connect in.  In part this is a security feature: your firewall blocks incoming connections so that the entire world can't attack you.  In part, however, it's because the systems in your home are using private IP addresses that are only meaningful locally.  And since your iPhone moves around, your home doesn't know how to reach it.  Therefore, a rendezvous service is needed.  That's what the cloud function in the figure performs, and that is what the curved lines indicate.
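
The device side of that pattern can be sketched in a few lines of Python.  The rendezvous host and message format are invented here, and a real product would use TLS, but the essential trick is the same: the Thing makes the outbound connection, and commands from your phone ride back over it:

```python
import socket
import time

RENDEZVOUS_HOST = "rendezvous.example.com"  # invented for illustration
RENDEZVOUS_PORT = 443  # outbound, so the home firewall permits it

def handle(command: bytes) -> None:
    """Stand-in for acting on a relayed command, e.g. 'record channel 7'."""
    print("received:", command.decode(errors="replace"))

def call_home() -> None:
    """Hold an outbound connection open and act on commands relayed back."""
    while True:
        try:
            with socket.create_connection((RENDEZVOUS_HOST,
                                           RENDEZVOUS_PORT)) as conn:
                conn.sendall(b"HELLO device-1234\n")  # invented registration
                while True:
                    command = conn.recv(4096)
                    if not command:
                        break  # server closed the connection; reconnect
                    handle(command)
        except OSError:
            time.sleep(30)  # back off, then re-establish the rendezvous
```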

The SlingBox on the left may not just be connecting for the sake of communicating with your smart phone.  It is probably also doing so for other reasons, such as receiving electronic program guide information.

In the world of IoT, that is a common design pattern.  Devices will need to communicate with their manufacturers' web sites for all sorts of reasons, but there is one common and important one: devices will have bugs.  As manufacturers develop fixes, devices will need to learn of those fixes and install them.  Every modern operating system and browser has this feature.  All Things will need it as well.  In fact, one big concern today is what happens when manufacturers do not offer fixes: those vulnerabilities are out there for anyone to exploit.  This is a big problem in the developing world, where consumers often buy devices on the secondary market, long after manufacturers intended them to be retired.
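
Here's a hedged sketch of what that check might look like on a Thing.  The manifest URL and fields are invented, and the version comparison is naive; the one non-negotiable step is verifying that the image really came from the manufacturer before installing it:

```python
import hashlib
import json
import urllib.request

MANIFEST_URL = "https://updates.example.com/device/manifest.json"  # invented
CURRENT_VERSION = "1.0.3"

def install(image: bytes) -> None:
    """Stand-in for the platform's actual image installer."""
    raise NotImplementedError

def check_for_update() -> None:
    """Fetch the manufacturer's manifest and apply a newer image, if any."""
    with urllib.request.urlopen(MANIFEST_URL, timeout=10) as r:
        manifest = json.loads(r.read().decode("utf-8"))  # invented fields
    if manifest["version"] <= CURRENT_VERSION:  # naive string comparison
        return  # nothing newer on offer
    with urllib.request.urlopen(manifest["image-url"], timeout=60) as r:
        image = r.read()
    # An integrity check alone is not enough: a real device must verify the
    # manufacturer's digital signature over the image before installing it.
    if hashlib.sha256(image).hexdigest() != manifest["sha256"]:
        raise ValueError("downloaded image does not match the manifest")
    install(image)
```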

Could a device transmit private information to a manufacturer?  Sure.  In fact, Samsung got caught last year with a dreadful privacy policy under which their televisions could have been listening to, and reporting, conversations.

Here's the rub: without extensive analysis, it's hard to know exactly what is being exchanged between a device and a manufacturer.  Encryption will keep observers from seeing what is being exchanged.  At the same time, a lack of encryption would be just as risky to consumer privacy, if not more so.

Any device that can communicate at all can potentially be compromised.  It's important to understand that there are risks with each Internet-enabled device.  But it's also important to consider any benefit the communication brings.  A refrigerator or a heater that knows it needs repair can have the manufacturer contact the owner, for instance.  That's worth something to some people.  Judge the risks for yourself.

What should the best practices be in this space and what should consumers expect in products?  More on that over time, but feel free to answer those questions yourself for now.


iPhone image courtesy World Super Cars on Wikipedia.

Court Order to Apple to Unlock San Bernardino iPhone May Unlock Hackers

A judge's order that Apple cooperate with federal authorities in the San Bernardino investigation may have serious unintended consequences. There are no easy answers. Once more, a broad dialog is required.

Previously I opined about how a dialog should occur between policy makers and the technical community over encryption.  The debate has moved on.  Now, the New York Times reports that federal magistrate judge Sheri Pym has ordered Apple to facilitate access to the iPhone of Syed Rizwan Farook, one of the San Bernardino attackers.  The Electronic Frontier Foundation is joining Apple in its fight against the order.

The San Bernardino fight raises both technical and policy questions.

Can Apple retrieve data off the phone?

Apparently not.  According to the order, Apple is required to install an operating system that would allow FBI technicians to make as many password attempts as they need, without the device delaying them or deleting any information.  iPhones have the capability of deleting all personal information after a certain number of authentication failures.

You may ask: why doesn't the judge just order Apple to create an operating system that doesn't require a password?  According to Apple, the password used to access the device serves as a key-encrypting key (KEK): it decrypts the key that itself decrypts the stored information.  Bypassing the password check therefore doesn't get you any of the data.  The FBI needs the password itself.
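
As a toy illustration of why bypassing the check is useless, consider this sketch using Python's standard library.  It shows the general KEK idea only, not Apple's actual construction, which also tangles in a hardware-bound key; the XOR "unwrap" stands in for a real key-wrapping algorithm:

```python
import hashlib

def derive_kek(passcode: str, salt: bytes) -> bytes:
    """Stretch the passcode into a key-encrypting key (toy parameters)."""
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 100_000)

def unwrap_data_key(wrapped_key: bytes, kek: bytes) -> bytes:
    """Toy unwrap (XOR) standing in for real key wrapping such as AES-KW."""
    return bytes(a ^ b for a, b in zip(wrapped_key, kek))

# Skipping the passcode *check* gains nothing: without the right passcode
# the derived KEK is wrong, and the unwrapped data key is garbage.
```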

What Apple can do is install a new operating system without the permission of the owner.  There are good reasons for Apple to have this ability.  For one, a previous installation may have failed, or the copy of the operating system stored on a phone may have been corrupted.  If technicians couldn't install a fresh version, the phone itself would become useless.  This actually happened to me personally.

The FBI can't build such a version of the operating system on their own.  As is best practice, iPhones validate that all operating systems are properly digitally signed by Apple.  Only Apple has the keys necessary to sign images.

With a new version of the software on the iPhone 5c, FBI technicians would be able to mount a brute-force attack, trying all passwords until they found the right one.  This won't be effective on later-model iPhones, because their hardware slows down queries, as detailed in this blog.
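
The arithmetic behind the brute force is simple.  Here is a sketch, reusing the toy derivation above and assuming some way to recognize the right key (a known plaintext, say).  With only 10,000 four-digit passcodes, even a deliberately slow software derivation falls quickly, which is why later iPhones push the per-guess delay into hardware:

```python
import hashlib
from itertools import product
from typing import Callable

def derive_kek(passcode: str, salt: bytes) -> bytes:
    """Same toy passcode-stretching function as in the sketch above."""
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 100_000)

def brute_force(salt: bytes, is_correct: Callable[[bytes], bool]) -> str:
    """Try every four-digit passcode until is_correct() accepts the key."""
    for digits in product("0123456789", repeat=4):
        guess = "".join(digits)
        if is_correct(derive_kek(guess, salt)):
            return guess
    raise ValueError("no four-digit passcode matched")

# At ~100 ms per guess, all 10,000 candidates take well under 20 minutes
# unless the hardware enforces delays or erases keys after failed attempts.
```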

Would such a capability amount to malware?

Kevin S. Bankston, director of New America's Open Technology Institute, has claimed that the court is asking Apple to create malware for the FBI to use on Mr. Farook's device.  There's no single clean definition of malware, but a good test of whether the O/S the FBI is asking for is in fact malware is this: if this special copy of the O/S leaked from the FBI, could "bad guys" (for some value of "bad guys") also use the software against the "good guys" (for some value of "good guys")?  Apple has the ability to write into the O/S a check of the device's serial number, so that the special version would run only on Mr. Farook's phone.  It would not be possible for bad guys to modify that number without invalidating the signature the phone checks before loading.  Thus, by this definition, the software would not amount to malware.  But I wouldn't call it goodware, either.

Is a back door capability desirable?

Unfortunately, here there are no easy answers, only trade-offs.  On the one hand, one must agree that the FBI's investigation is impeded by the lack of access to Mr. Farook's iPhone, and as other articles show, this case is neither the first, nor will it be the last, of its kind.  As a result, agents may not be able to trace leads to other possible co-conspirators.  A Berkman Center study claims that law enforcement has sufficient access to metadata to determine those links, and there's some reason to believe that.  When someone sends an email, the email servers between the sender and recipient log that a message was sent from one person to another.  A record of phone calls is kept by the phone company.  But does Apple keep a record of FaceTime calls?  Why would they, if it meant a constant administrative burden, not to mention additional liability and embarrassment when (not if) they suffer a breach?  More to the point, having access to the content on the phone gives investigators clues as to what metadata to look for, based on what applications were installed and used on the phone.

If Apple had the capability to access Mr. Farook's iPhone, the question would then turn to how that capability would be overseen.  The rules about how companies handle customer data vary from one jurisdiction to another.  In Europe, the Data Protection Directive is quite explicit, for instance.  The rules are looser in the United States.  Many are worried that if U.S. authorities have access to data, so will other countries, such as China or Russia.  Those worries are not unfounded: a technical capability knows nothing of politics.  Businesses fear that if they accede to U.S. demands, they must also accede to other governments' demands if they wish to sell products and services in those countries.  That means billions of dollars are at stake.  Worse, other countries may demand more intrusive mechanisms.  As bad as that is, and it's very bad, there is worse.

The Scary Part

If governments start ordering Apple to insert or create malware, what other technology will also come under these rules?  It is plain as day that any rules that apply to Apple iPhones would also apply to Android-based cell phones.  But what about other devices, such as televisions?  How about refrigerators?  Cars?  Home security systems?  Baby monitors?  Children's toys?  And this is where it gets really scary.  Apple has one of the most competent security organizations in the world.  They probably understand device protection better than most government clandestine agencies.  The same cannot be said for other device manufacturers.  If governments require those other manufacturers to provide back-door access, it would be tantamount to handing criminals the keys to all our homes.

To limit this sort of damage, there needs to be a broad consensus as to what sorts of devices governments should be able to access, under what circumstances that access should happen, and how that access will be overseen to avert abuse.  This is not an easy conversation.  That’s the conversation Apple CEO Tim Cook is seeking.  I agree.