Will New NY Banking Regulations Actually Tighten Cybersecurity?

Proposed New York banking regulations might not help that much.

New York is proposing new cybersecurity rules that would raise the bar for banks over which the state has jurisdiction (wouldn’t that be just about all of them?).  On their face, the new regulations would seem to improve overall bank security posture, but digging a bit deeper leads me to conclude that they still need some work.

A few key aspects of the new rules are as follows:

  1. Banks must perform annual risk assessments and penetration tests;
  2. New York’s Department of Financial Services (DFS) must be notified within 72 hours of an incident (there are currently numerous timeframes);
  3. Banks must use 2-factor authentication for employee access; and
  4. All non-public data must be encrypted, both in flight and at rest.

The first item on that list is what Chief Information Security Officers (CISOs) already get paid to do.  Risk assessment in particular is the most important task on the list, because as banks evolve their service offerings, they must ascertain both evolving threats and potential losses.  For example, as banks added iPhone apps, the risk of an iPhone being stolen became relevant, which in turn influenced app design.

Notification laws already exist in just about all jurisdictions.  The proposed banking regulation does not say what the regulator will do with the information or how it will be safeguarded.  A premature release can harm ongoing investigations.

Most modern banks outside the United States already use two-factor authentication for employee access, and many require two-factor authentication for customer access.

That last one is a big deal.  Encrypting data in flight (e.g., transmissions from one computer to another) protects against eavesdroppers.  At the same time, absent other controls, encryption can obscure data exfiltration (information theft). Banks currently have many tools that rely on certain transmissions being “in the clear”, and it may take some redesign of communication paths to satisfy both the encryption-in-flight requirement and auditing needs.  Some information is simply impractical to encrypt in flight today, including discovery protocols such as DHCP, name service exchanges (DNS), and certain other network functions.  Encrypting much of this information would require protection at a still lower layer, such as IEEE 802.1AE (MACsec) hop-by-hop encryption.  The regulation is, again, vague on precisely what is necessary.  One thing is clear, however: its definition of non-public information is quite broad.

To meet the “data at rest” requirement, banks will have to employ either low-level disk encryption or higher-level object-level encryption.  Low-level encryption protects against someone stealing a disk or taking it from the trash and reading it, but provides very little protection against someone breaking into a computer while the disk is still spinning.  Moreover, banks generally have rules requiring that disks be crushed before they can leave a data center.  Requiring data at rest to be encrypted in data centers may therefore not provide much risk mitigation.  While missing laptops have repeatedly been a source of data breaches, how often has a missing data-center disk caused one?

Object-level encryption, or the encryption of groups of information elements (think email messages), can provide strong protection should devices be broken into.  Object-level encryption is particularly interesting because, if done right, it can address both data in flight and data at rest.  The challenge with object-level encryption is that the tools for it are quite limited.  While there are some tools such as email message encryption, and while there are various ways one can use existing general-purpose mechanisms such as OpenSSL to encrypt objects at rest, object-level encryption remains a challenge because it must be implemented at the application level across all applications.  Banks may have tens of thousands of applications running at any one time.
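The pattern the paragraph describes can be sketched in a few lines of Python.  The toy XOR one-time-pad cipher below is purely illustrative (a real deployment would use an authenticated cipher such as AES-GCM from a vetted library, plus proper key management); the function names are my own invention, not from any standard tool:

```python
import secrets

def encrypt_object(plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt one object (e.g. a single email message) under its own key.
    Toy one-time-pad XOR cipher: for illustration only, NOT real crypto.
    A production system would use AES-GCM or similar from a vetted library."""
    key = secrets.token_bytes(len(plaintext))        # fresh random key per object
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def decrypt_object(key: bytes, ciphertext: bytes) -> bytes:
    """Reverse the per-object XOR; only key holders can recover the object."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))

# The same ciphertext blob can sit on disk (data at rest) or cross the
# network (data in flight): encrypting the object once covers both cases.
```

The architectural point is in the last comment: because the object itself is opaque, the at-rest and in-flight requirements are met by a single mechanism, at the cost of pushing encryption and key management into every application that handles the object.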

This is an instance where the financial industry could be a technology leader.  However, all such development must be grounded in a proper risk assessment.  Otherwise we end up in a situation where banks will have expended enormous amounts of resources without having substantially improved security.

Holiday Shoppers: Don’t Get Phished!

Don’t get phished this holiday season. Here are some common sense reminders.

As we enter the holiday season, if you order online, fraudsters will be targeting you.  Many people will be easy marks: their computers will become infected with viruses, and they will be victims of identity theft. Big online vendors such as eBay and Amazon represent big targets, but others will be targets as well.  Phishers will be sending out loads of poisonous messages, hoping that a few people will mistakenly click on links to malware-laden web sites.  While big mail providers like Google and Yahoo! work hard to filter out such garbage, it’s unavoidable that some dangerous emails will get through.  Preventing such thefts while shopping online can be tricky because fraudulent and legitimate messages look nearly identical. Fraudsters may know something about you, such as your name, your mother tongue, the region in which you live, and the names of some of your friends.  A competent fraudster will use the logos and mimic the look and feel of a legitimate online vendor.

Some of my techie friends are probably snickering, saying “That couldn’t happen to me.”  It probably already has.

Here are a few common sense suggestions to keep you from becoming a victim:

  1. Here’s the obvious one: if you didn’t order something from a vendor, be highly suspicious of any email from it, especially messages that claim to contain order information or coupon offers.
  2. If you have ordered something, beware any message with a subject that is vague, such as “your order”.  A legitimate online vendor will somehow identify the order, either with an order number or with the name of the product you have ordered.  This may appear in the subject line or in the body of the message.
  3. No legitimate online vendor sends zip files in email.  Don’t open them.  The same largely holds for most other attachments.  If they can’t provide you necessary information in the body of the message, it’s probably not legitimate.
  4. Most online vendors provide you a means to log into their service to track orders.  If you are at all in doubt about whether a message is legitimate, without clicking on a link in the message, visit their web site, and log in to track the order.  If you need help, contact the vendor’s customer service.
  5. While banks may email you alerts of some form, it is still always better to go to their web sites without clicking on links in the messages.
  6. Unless you gave it to them directly, shippers such as Federal Express do not have your email address.  No decent online vendor will share your email address with a shipper.
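One reason the advice above says to visit the vendor’s site yourself rather than click: a fraudulent link can embed a legitimate-looking name inside a host the fraudster controls.  A small Python sketch of the kind of check a mail filter might apply (the function name is my own, for illustration):

```python
from urllib.parse import urlparse

def link_matches_vendor(url: str, vendor_domain: str) -> bool:
    """True only if the link's host is the vendor's domain or a subdomain of it.
    Catches tricks like https://amazon.com.evil.example/login, where the real
    host is evil.example even though "amazon.com" appears in the URL."""
    host = (urlparse(url).hostname or "").lower()
    vendor = vendor_domain.lower()
    return host == vendor or host.endswith("." + vendor)
```

The trick it catches is that browsers read hostnames right to left: everything before the registered domain is just a subdomain the attacker chose.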

What happens if you do click on something you shouldn’t have?  There is no easy answer.  Unless you are using antivirus, you have to assume the worst.  This means that it’s important to maintain good backups.  That way you can reinstall from scratch.  Sounds painful?  Then don’t carelessly click on email links.

Want some more advice on staying safe?  Check out StaySafeOnline.org.

WCIT, the Internet, the ITU-T, and what comes next

Courtesy of Mike Blanche of Google, a map of the WCIT outcome shows in black the countries that signed the treaty developed at WCIT, in red the countries that indicated they would not sign it, and in other shades the countries that are still deciding.  A country can always change its mind.

Over the next few weeks that map will change, and the dust will settle.  The fact that the developed world did not sign the treaty means that the Internet will continue to function relatively unmolested, at least for a time, and certainly between developed countries.   As the places that already heavily regulate telecommunications are the ones who are signing the treaty, its impact will be subtle.  We will continue to see international regulatory challenges to the Internet, perhaps as early as 2014 at the ITU’s next Plenipotentiary conference.  Certainly there will be heated debate at the next World Telecommunication Policy Forum.

This map also highlights that the ITU is the big loser in this debacle.  Secretary-General Hamadoun Touré claimed that the ITU works by consensus.  That is simply not so when matters are contentious, and quite frankly he lacked the power and influence to bring the different viewpoints together.  This loss of consensus has split the Union and has substantial ramifications.  There is no shared vision or goal, and this will need to be addressed at the next Plenipotentiary conference.

With different sectors and diverse participants, it is hard to lump the Union into a single group. Nations come together to manage radio spectrum in the ITU-R.  That’s important because spectrum crosses borders, and needs to be managed.  In the ITU-D, both developing and developed countries come together to have a dialog on key issues such as cybersecurity and interoperability.  The work of the -D sector needs to be increased.  Most notably, their Programmes need even more capability, and the study groups should be articulating more clearly the challenges and opportunities developing countries face.

The -T standardization sector is considerably more complex.  It’s important not to lose sight of the good work that goes on there. For example, many of the audio and video codecs we use are standardized in ITU-T study group 16.  Fiber transmission standards in study group 15 are the basis for long haul transmission.  Study group 12 has some of the foremost experts in the world on quality of service management.  However, the last six years have demonstrated a fundamental problem:

At the end of the day conflicts arise; that is the nature of standards work.  And because of one country, one vote, the ITU-T caters to developing countries, which by their nature are not at the leading edge of technology.  The ITU-T likes to believe it holds a special place among standards organizations, and yet there have been entire study groups whose work has been ignored by the market and governments alike.  To cater to those who are behind the Rogers adoption curve is to chase away those who are truly in front.  This is why you don’t see active participation from Facebook, Twitter, or Google in ITU-T standards, and why even larger companies like Cisco, IBM, HP, and others prefer to do protocol work largely elsewhere.1

So what can be done?

In practice study groups in ITU-T serve four functions:

  • develop technical standards, known as recommendations;
  • provide fora for vertical standards coordination;
  • direct management of a certain set of resources, such as the telephone number space;
  • establish accounting rates and regulatory rules based on economics and policy discussions.

The first two functions are technical.  The others are political.  The invasion of political processes into technical standards development is also a fundamental issue.  I offer the above division to demonstrate a possible way forward.  The role of the -D sector should be considered in all of this.  Hearing from developing countries about the problems they are facing continues to be important.

The ITU-T and its member states will have the opportunity to consider this problem over the next two years, prior to its plenipotentiary meeting.  There is a need for member states to first recognize the problem, and to address it in a forthright manner.

What Does the Internet Technical Community Need to Do?

For the most part, we’re at this point because the Internet Technical Community has done just what it needed to do.  After all, nobody would care about regulating a technology that is not widely deployed.  For the most part, the Internet Technical Community should keep doing what we’re doing.  That does not mean there isn’t room for improvement.

Developing countries have real problems that need to be addressed. It takes resources and wealth to address cybersecurity, for example. To deny this is to feed into a political firestorm.  Therefore continued engagement and understanding are necessary.  Neither can be found at a political conference like WCIT.  WCIT has also shown that by the time people show up at such places, their opinions are formed.

Finally, we must recognize an uncomfortable truth about IPv4.  While Africa and Latin and South America still have free access to IPv4 address space, the rest of the world has exhausted its supply.  Whenever a scarce resource is given a price, there will be haves and have-nots.  When the have-nots are poor, and they often are, it can always be cast as an inequity.  In this case, there truly is no need for such inequity, because IPv6 offers everyone ample address space.  Clearly governments are concerned about this.  The private sector had better regulate itself before it gets (further) regulated.

Another Uncomfortable Truth

Developing countries are also at risk in this process, and perhaps most of all.  They have been sold the idea that somehow “bridging the standardization gap” is a good thing.   It is one thing to participate and learn.  It is another to impede leading edge progress through bloc votes.  Leading edge work will continue, just elsewhere, as it has.

1 Cisco is my employer, but my views may not be their views (although that does happen sometimes).

Android Phones: The Next Security Threat?

Take it as an axiom that older software is less secure.  It’s not always true, but if the code wasn’t mature at the time of its release (meaning it hadn’t been fielded for years upon years), it’s all but certain to be true.  In an article in PC Magazine, Sara Yin finds that only 0.4% of Android users have up-to-date software, as compared to the iPhone, where 90% of users have their phones up to date.

This represents a serious threat to cybersecurity, and it should have been a lesson already learned.  Friend and researcher Stefan Frei has examined in great detail update rates for browsers, a primary vector for attacks.  The irony here is that the winning model he identified was that of Google’s Chrome.

What then was the failure with Android?  According to the PC Magazine article, the difference lies in who is responsible for updating the software.  Apple takes sole responsibility for the iPhone’s software.  There are a few parameters that the service provider can set, but otherwise it is hands off.  Google, however, provides the software to mobile providers, and it is those mobile providers who must then update the phone.  Guess which model is more secure?  Having service providers in the update loop makes the Internet more insecure.  Google needs to reconsider its distribution model.