How safe is your luxury car? – An automotive IT security analysis

22 06 2012

You have a new luxury car, the talk of your street. It boasts features like new-age infotainment, ABS, heated seats, electronic steering controls, a theft-deterrent system, and of course airbags. You feel on top of the world whenever you are inside it. Then one day, after the first free service, just as you have gotten used to the car, you are driving home at around 60 mph when one of your wheels brakes on its own, without your command, and the engine stalls. What would happen to you? Even if you sued the car company, would you recover your loss (a permanently damaged joint, for example)?

This is what an attacker can do to your car. Attackers do not always hack a system for money; it could be for fame, or simply to indulge a sociopathic habit. I would like not to believe the next sentence, but it is really possible and highly probable. The cause is not the mechanical build quality of the car; it is the hacking of its electronic control units (ECUs) that produces this shocking scenario.

By 2010, a high-end luxury car contained around seventy ECUs, which accounted for a large share (around 40%) of the cost of manufacturing the whole vehicle. All the features I boasted about earlier are provided by these ECUs: computers that each handle a specific function by running tens of thousands of lines of software. The units are generally interconnected by some variant of the most popular vehicle network, the Controller Area Network (CAN). All ECUs are diagnosed through the federally mandated On-Board Diagnostics (OBD) protocol, via a mandatory OBD-II port usually located just under the dash. Service personnel are given test tools that connect to this port, and it is also the door for an attacker, who can attach a device to it, temporarily or permanently, to gain control over the ECUs. ECU states can be modified by powerful functions, called device control functions, built on top of CAN. If the attacker's device can record the device control patterns and the communication patterns between the ECUs, the genie is out of the bottle.

The companies permit only select service stations to upgrade ECU firmware outside the factory, after a simple seed-key access-control handshake, similar to a "what is your mother's maiden name?" style of authentication. The problem is that the seed and the key are both only sixteen bits long and, annoyingly, permanent. It is easy for a hacking enthusiast to crack such access codes. To date, no manufacturer has deployed encrypted or digitally signed ECU software. This is the point at which an attacker can inject malicious code to corrupt the system and gain control over your car. Technically, the car company's confidential software, communication protocols, and data are stolen, possibly along with the injection of a "virus" into the whole electronic system. The computer system is thus hacked.
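
To see why a fixed sixteen-bit seed-key pair is so weak, consider a toy sketch. The real seed-to-key algorithms are proprietary; the XOR/rotate transform and the secret below are invented purely for illustration, but the key space really is only 2^16:

```python
# Toy illustration of why a fixed 16-bit seed-key scheme is weak.
# The ECU's real seed-to-key transform is proprietary; the function
# below is a made-up stand-in for demonstration only.

SECRET = 0x9C3A  # hypothetical fixed secret baked into the ECU

def ecu_key_from_seed(seed: int) -> int:
    """Hypothetical ECU transform: the tester must return this key."""
    rotated = ((seed << 3) | (seed >> 13)) & 0xFFFF
    return rotated ^ SECRET

def crack_fixed_key(seed: int, try_key) -> int:
    """Exhaustively try all 2**16 keys; at most 65,536 attempts."""
    for guess in range(0x10000):
        if try_key(seed, guess):
            return guess
    raise RuntimeError("no key found")

if __name__ == "__main__":
    seed = 0x1234  # seed observed on the diagnostic bus
    found = crack_fixed_key(seed, lambda s, k: k == ecu_key_from_seed(s))
    print(f"cracked key: {found:#06x}")
```

Because the pair never changes, a single successful brute force unlocks the ECU forever.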

There are different techniques of attacking:

Through messages: CAN messages are "sniffed" over a long period to learn the communication patterns of the ECUs. The attacker can then insert malicious bits into the communication lines using the known patterns.
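
The "sniffing" step can be as simple as tallying which message IDs appear and how often. Here is a minimal sketch over a pre-recorded trace; the IDs and payloads are invented, and a live capture would need a CAN interface and a library such as python-can:

```python
from collections import Counter

# Simplified recorded CAN trace: (timestamp_s, arbitration_id, data_bytes).
# IDs and payloads are invented for illustration.
trace = [
    (0.00, 0x244, b"\x00\x00\x10\x2a"),  # e.g. a wheel-speed broadcast
    (0.01, 0x3e9, b"\x1f\x40"),          # e.g. engine RPM
    (0.02, 0x244, b"\x00\x00\x10\x2b"),
    (0.03, 0x244, b"\x00\x00\x10\x2c"),
    (0.04, 0x3e9, b"\x1f\x41"),
]

def message_pattern(frames):
    """Count how often each arbitration ID appears in the trace."""
    return Counter(arb_id for _, arb_id, _ in frames)

pattern = message_pattern(trace)
# The most frequent ID is a prime candidate for spoofed injection:
# receivers expect it constantly and perform no authentication.
top_id, count = pattern.most_common(1)[0]
print(hex(top_id), count)
```

Once the attacker knows which IDs carry which signals, forging a frame is trivial, because CAN receivers trust any frame with the right ID.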

Through device control: Device control functions are mainly used by manufacturers to change ECU variables and states. For example, a device control function writes the odometer value onto the instrument panel cluster display. If an attacker gets hold of that function, service personnel could be fooled by writing a lower odometer value just before every service visit, a financial loss to the car manufacturer.

Reverse engineering: The ECU memory contents are dumped, and the binary is reverse engineered to recover the code and functionality of the ECU software.

There are certainly a number of ways to counter these attacks. Some fall to the manufacturer and some to the users; a few are:

  1. Instead of a fixed-length, permanent seed-key access control, use a mechanism that changes the access codes on every request. This closes many vulnerable doors.
  2. Encrypt the CAN messages, which are highly predictable because of the protocol's wide use; today the messages are so decodable that even without the access codes, ECUs can be tampered with by sending forged CAN frames. Encryption denies the attacker the ability to read the communication patterns between ECUs.
  3. Digitally sign device control functions. These functions are too powerful to trust unauthenticated; only functions signed by the manufacturer should be accepted.
  4. Always use digitally signed, encrypted software. This prevents hackers from reverse engineering the ECU software.
  5. Most important of all, educate the users. Users should be aware of the potential threat they are buying; after all, a car is a life-critical system. They should be trained first-hand (not just handed a user manual) about the OBD-II port and the possible avenues of attack.
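
Points 1, 3, and 4 above can be combined into a single idea: derive a fresh key from a random seed on every unlock request, and verify a signature before accepting any firmware. A minimal sketch using an HMAC; the shared secret is illustrative, and a real manufacturer would more likely use per-vehicle keys or public-key signatures:

```python
import hashlib
import hmac
import os

MANUFACTURER_SECRET = b"demo-shared-secret"  # stand-in; real ECUs would use per-vehicle keys

def issue_seed() -> bytes:
    """ECU side: a fresh random seed per unlock attempt (point 1)."""
    return os.urandom(8)

def key_for_seed(seed: bytes) -> bytes:
    """Both sides derive the session key from the seed and the secret."""
    return hmac.new(MANUFACTURER_SECRET, seed, hashlib.sha256).digest()

def sign_firmware(image: bytes) -> bytes:
    """Manufacturer side: sign the firmware image (points 3 and 4)."""
    return hmac.new(MANUFACTURER_SECRET, image, hashlib.sha256).digest()

def ecu_accepts(image: bytes, signature: bytes) -> bool:
    """ECU side: refuse any image whose signature does not verify."""
    return hmac.compare_digest(sign_firmware(image), signature)

# A key replayed from an old seed no longer works:
seed1, seed2 = issue_seed(), issue_seed()
assert key_for_seed(seed1) != key_for_seed(seed2)

# Tampered firmware is rejected:
image = b"legit firmware v1.2"
sig = sign_firmware(image)
assert ecu_accepts(image, sig)
assert not ecu_accepts(image + b"!", sig)
```

The point is not this particular scheme but the property it demonstrates: a captured unlock exchange is useless for the next session, and a modified image fails verification.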

Let us hope that the car companies that control their features with software treat ECU security as a high priority in the immediate future, and that potential buyers know what they are buying on the inside and are well equipped to fight ECU security attacks.


Book – Security in Computing by Charles P. Pfleeger, 2nd Edition


Security Offense/Defense using Semantic Web Agents

21 06 2012

We have had spies helping secure countries and armies alike for a very long time. With the advent of computing, spying has advanced, with all kinds of sophisticated tools to do what human spies want to do, yet the idea of a "Virtual Spy" protecting cyberspace has not even hit Hollywood yet. I am not talking about robots or artificial intelligence here, just a simple idea based on the fact that if human spies can spy, spying can be automated as well. I am proposing the idea of a "Virtual Spy" in the hope that a simple idea like this, backed by the RIGHT APPLICATION OF technologies, can actually produce something significant in the area of information security. The Virtual Spy is a computer program that "understands" the computing environment it is in, "trusts" some information sources, "senses" when an attack is staged, and "reacts" to the attack. It has been trained, or has learned, to keep the computing environment secure and to deal with attacks. It "understands" what confidentiality, integrity, and availability mean in its specific environment, because it possesses the knowledge required to be the Chief Information Security Officer's right-hand virtual employee.

There are three fundamental areas that need to be understood and applied in conjunction with one another to make the case for a virtual spy, and there are conventional-wisdom-based paradigms in these areas that need to be broken before they can be applied in the information security arena:

  1. Application Mining & Infrastructure Discovery
  2. Feedback control based Self adaptive systems
  3. Semantic Web Agents & Inference

The following suggested readings give a high-level overview of these areas, their future research directions, and their intended applications; I highly recommend them as background for the terminology used through the rest of this post.

  1. Phil Murphy, "Strategies to Cut Application Costs and Increase Productivity, Using Application Mining Tools," Forrester Research, April 24, 2009 (updated July 9, 2009)
  2. HP UCMDB infrastructure discovery tool
  3. David Garlan, Shang-Wen Cheng, Bradley Schmerl, João Pedro Sousa, Bridget Spitznagel, Peter Steenkiste, and Ningning Hu, "Software Architecture-based Adaptation for Pervasive Systems," School of Computer Science, Carnegie Mellon University, Pittsburgh, PA
  4. James Hendler, "Agents and the Semantic Web," IEEE Intelligent Systems (March/April 2001), Department of Computer Science, University of Maryland, College Park, MD
  5. "Ultra Large Scale Systems"
  6. State of Information Security

What problem is the “Virtual Spy” trying to solve?

Neustar examined forensic logs from 168 of the largest 500 U.S. companies by revenue and found evidence that 162 of them (reference #6) owned machines that at some point had been transmitting data out to hackers. Frustration runs high in the security community over our inability to defend against these attacks; "securing the perimeter" with "larger locks" and "heavier doors" has helped only to an extent. A solution keeps eluding us, because once any of these strategies is implemented, a smarter hacker around the block cracks it and leaves us seeking the next big perimeter defense. The "Virtual Spy" (v-spy), by contrast, watches and roams within the premises of the corporation or the government (the enterprise), looking for security breaches as they happen. The v-spy is not another security COTS product that every corporation needs to buy; it is an employee the enterprise develops from within. This employee trains on and comes to understand the assets within the "boundary" of the enterprise. The CISO gives the new recruit direction on trusted sources, "High Value Assets," and response measures. While the new kid on the block is not planning to replace any of the security controls in place, it is planning to "up" the game significantly.

How does the new recruit learn about the enterprise it is working for?

Put in simple terms, the v-spy first needs to know what is in the enterprise, then understand what the problems are in the context of the enterprise, then address those problems, and keep repeating this cycle as long as its employment contract is valid. Information security is not merely about perimeter defense; security permeates every realm of the enterprise's computing domain, and I state that only to point out the relevance of the discussion that follows. The conventional-wisdom barrier I am trying to break here is "Information security personnel, please stick to your lanes." Although these subjects are widely discussed for benefits other than security, my intent is to discuss them purely in the context of information security.

 “Knowing” – Securing Assets starts with knowing what they are

  • Application Mining – reverse engineering information about an application by looking at patterns in its outputs. While the Forrester research (reference #1) talks mostly about static aspects of an application, our use points toward the dynamic characteristics of applications.

Sample information from this exercise:

Users in the enterprise use SysA to open a database connection with DB1

Users of SysB query all rows from Employees Table in DB2

  • Infrastructure Discovery – a tool like HP UCMDB can "snoop" an enterprise's entire network and assemble information about every single asset in the computing environment. This takes the v-spy giant steps forward; it is now sitting on a gold mine of information about the enterprise.

Sample information from this exercise:

SysA is hosted in Server123 located in Phoenix, AZ USA

DB1 is hosted in Server456 located in Prague, Czech Republic, Europe

  • Ontologies of applications and infrastructure for the enterprise (not for the worldwide web) – the data from application mining and infrastructure discovery are converted to RDF, and RDF schemas are generated.

Sample “Triples” from this exercise

DB1 stores Employee Data

HR Personnel manage Employees

Sample ontology from this exercise

Ample proof of how this information can be of immense use to information security lies in the diagram: it brings context to information security personnel and, even better, it is all machine readable!

  • Mapping the developed ontologies to the enterprise-function ontologies; see the example below to understand what this would look like in an enterprise.
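
To make the "machine readable" claim concrete, here is a toy in-memory triple store over the sample triples above. A real deployment would use RDF serializations and a library such as rdflib; the entity names are the post's own examples plus a couple of invented hosting facts:

```python
# Toy triple store over the sample facts above; a real system would use
# RDF/OWL and a library like rdflib, but the query idea is the same.
triples = {
    ("DB1", "stores", "Employee Data"),
    ("HR Personnel", "manage", "Employees"),
    ("SysA", "opensConnectionTo", "DB1"),  # from application mining
    ("DB1", "hostedOn", "Server456"),      # from infrastructure discovery
    ("Server456", "locatedIn", "Prague"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return {
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    }

# "Which asset stores Employee Data, and where does it physically live?"
asset = next(iter(query(p="stores", o="Employee Data")))[0]
host = next(iter(query(s=asset, p="hostedOn")))[2]
print(asset, "->", host)  # context an agent (or a CISO) can act on
```

This is the kind of cross-domain question that a log file cannot answer but a mapped ontology can.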

 “Understanding” – do we really know what the assets in the enterprise do?

  • Functional services (HR, Design, Engineering, etc.) that leverage the enterprise's machine-readable ontologies and enable machine-to-machine interaction are developed to bring context into the transactions happening in the enterprise. This is the game changer as it relates to information security.
  • Identify agents for each functional area of the enterprise. "Make Payment to Vendor", "Make Payment to Employee", "Terminate a Contract", "Purchase a Part", "Ship a Part", and "Process Expense Reports" are some examples. Agents built around these services can execute transactions like "Shut down all financial transactions, now" or "Shut down network access to all High-Value Assets".
  • These agents now "moderate" all the transactions happening in the enterprise. Think of this as the same concept as wrapper classes: at no point will any reasonable security architect recommend redoing all the applications just for security. Even though the benefits of agents outweigh the costs, I still would not recommend rewriting ALL the applications; just "wrap" the transactions in the higher-level semantic languages, enabling us to gain the context of the transactions happening in the environment. This is not to be confused with logging every single activity; agents understand what each transaction means in context, because they can read and understand the ontologies in the ontology repository.
  • The v-spy could be one or more of these agents, specialized in purpose to defend the enterprise from attacks. All agents are "aware" of the other agents in the enterprise, and when any of them sees a transaction it believes is out of context, it summons the v-spy agent closest to it, or all of them.

“Addressing” – Confidentiality, Integrity and Availability

  • Establishing inter-agent communications – the mapping of the ontologies and the self-advertising services in the environment should make this easily possible (well, not very easily at the time I am writing this). Since the agents themselves moderate the actions the assets perform, the agents look for end results, not just anomalies in network activity.

Sample scenario using the Application and Infrastructure Ontology above:

One or many agents in the HR domain will be moderating transactions to DB1 and SysA. Say, as an example, the domain network controller is sending many requests to connect to Server123 and has raised an event, yet no other agents are reporting any major spike in transactions: something is up. This check is facilitated by agent-to-agent communication across multiple domains within the enterprise.
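
The scenario above amounts to a cross-agent consistency check: one agent's load spikes while its peers report normal levels. A hedged sketch of that logic; the threshold, agent names, and counts are all invented:

```python
# Hedged sketch of the cross-agent check described above: an agent's
# request count is suspicious only if it spikes while its peers stay flat.
SPIKE_FACTOR = 3.0  # illustrative threshold

def suspicious(reports: dict, baseline: dict) -> list:
    """Return agents whose load spiked while every other agent stayed normal."""
    spiked = [
        agent for agent, count in reports.items()
        if count > SPIKE_FACTOR * baseline.get(agent, 1)
    ]
    calm_elsewhere = all(
        reports[agent] <= SPIKE_FACTOR * baseline.get(agent, 1)
        for agent in reports if agent not in spiked
    )
    return spiked if calm_elsewhere else []

baseline = {"hr-agent": 100, "finance-agent": 80, "eng-agent": 120}
reports  = {"hr-agent": 900, "finance-agent": 85, "eng-agent": 110}
print(suspicious(reports, baseline))  # the isolated spike worth escalating
```

The isolated spike, rather than any single threshold breach, is what gets escalated to the inference engine.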

  • Inference engines are like the command-and-control rooms the agents can talk to: send in a report of an event and get an inferred result back. The information context is the same for all these specialized programs: the ontologies. I propose a special "Security Events Inference Engine" that understands the enterprise's security risks and can instruct responses to the agents ("The Matrix", as in the movie).
  • The "Virtual Spy" agent responds to events through the self-adaptive systems (the same idea as an antibiotic). The inference engine answers event-inference requests with response events defined in the ontology repository.
  • Architecture description languages are increasingly used to define system architectures, and key systems can have components or sub-systems built in that respond to their own outputs, based on the self-healing systems proposed by Dr. Garlan et al. (reference #3). I propose extending the use of self-healing and adaptive systems to respond to security events.
  • These responses are defined in the ontology repository as services the agents are "aware" of.
  • Once the inference engine's response is received by an agent, the agent should be able to initiate response actions based on policies defined in the ontology repository, possibly including the following:
  1. Preventive responses by shutting down Domain Network Controllers momentarily
  2. Staging a counterattack against the attacker, based on the information the attacker has left behind with the agents in the enterprise, who can communicate with one another to track the attacker down


This proposal is neither a technical tell-all nor proof that I have personally tried these concepts in a lab. It is my attempt to apply modern computing concepts to practical problems that can benefit from these unusual applications. Even more important is the idea of challenging conventional wisdom (I like to call it today's technology superstition) as far as applying technologies goes. For the most part, the way our minds work has not significantly changed over the centuries; there will always be bad guys who try to steal from others. There is no such thing as an impenetrable fort: if one human can get to the other side, another will follow. I do realize there is a lot of catching up to do to reach the end state, but this end state is desirable for everyone in information technology, and maybe this time information security personnel will drive the change in this ever-evolving field.

What is lacking in our computing environment is context, which I believe the proposal above provides, and I will argue that the biggest beneficiaries of context are information security personnel, as illustrated above. As my friend from the southern part of India, far from today's technologies and toiling on a farm, asked me: "So, if I tried to take some of those car designs you showed me on your computer today, would your computer catch me?"

The Sum of Your Parts: Business Analytics and Unique Identifiers

20 06 2012

We haven’t been properly introduced.  I’m that guy from your Intro to Information Security course, but you probably know me better as:

  • 00:10:FA:49:6B:E9
  • SSIDs Starbucks & Caribou
  • Mozilla/5.0 (Linux; U; Android 2.3.3; en-au; GT-I9100 Build/GINGERBREAD) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1 (2011-10-16 20:22:55)
  • Cookies: text-based, encoded, unsweetened
  • Target Guest ID #34980292

I Really Want to Sell You Stuff

If you have found this blog post, chances are you are technology oriented and quite knowledgeable about the modern information security threat landscape. But are you paranoid? Would you give your name or home address to a complete stranger? Would it bother you if someone knew your every move? Would you share details of the technologies you rely upon with strangers flexing hacker skillz? Are you confident in your ability to maintain your anonymity?

Aristotle put forth that "the whole is more than the sum of its parts". Sure, this sounds noble and speaks to the humanity of man, but is it still true? What if the sum of your parts is really the sum of every commercial transaction conducted over the course of your life? BBC News estimates that value at $1.94M [1]. It is safe to suggest that there are entities in the world that place more value on that dollar amount than on the whole of you. Business analytics are scary. ComputerWorld and SAS would have you believe that "business analytics are predictive as well as historical, which requires a cultural shift to the acceptance of a proactive, fact-based decision-making environment, providing organizations with new insights and better answers faster" [2]. In other words, it's the science of selling to consumers by learning everything a business can about them, fusing that knowledge with behavior analysis, and wrapping it up in a marketing campaign tailor-made for you.

Who Exactly are “You” and How Do Businesses Get This Data?

You are the sum of your parts. In the Information Age, your parts are the menagerie of unique digital IDs that you litter throughout the real and virtual worlds. Digital devices have MAC addresses, IP addresses, user-agents, usernames, cookies, and countless other identifiers. From each of these IDs, businesses are able to glean valuable pieces of information: something that you are, something that you like, or even where you were when the information was collected. Each of these adds context to "you", and once correlated in a database, they will ultimately paint the real "you" in fine detail.

First, let's look at that phone in your pocket. I bet you love your new iPhone; it really is hard to remember life before mobile YouTube hilarity. However, those data plans are really expensive. Lucky for you that you outwitted the phone company and configured your phone to milk free wifi wherever you go. Good for you? Wrong. In an effort to ease the process of attaching to networks, your phone constantly throws SSID Probe Requests out into the world [3]. These Probe Requests reference every network you have ever attached to, looking for old friends. This means that 00:10:FA:49:6B:E9, aka the unique MAC address of that guy's phone, is throwing out Probe Requests for the "Starbucks" and "Caribou" SSIDs it connected to last week. Now, anyone within earshot listening in the 2.4 GHz 802.11 spectrum has learned that 00:10:FA:49:6B:E9 is in the area and has been to Starbucks and Caribou. Analysis of this data may or may not support that 00:10:FA:49:6B:E9 likes coffee.
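
From a listener's perspective, the probe-request leak reduces to grouping overheard (MAC, SSID) pairs. A sketch over pre-captured records; an actual capture would need a wireless card in monitor mode and a tool such as scapy, and the frames below are invented:

```python
from collections import defaultdict

# Invented capture of (source MAC, probed SSID) pairs, as a passive
# listener in the 2.4 GHz band might record them. A real capture would
# need a wireless interface in monitor mode (e.g. driven by scapy).
probe_requests = [
    ("00:10:FA:49:6B:E9", "Starbucks"),
    ("00:10:FA:49:6B:E9", "Caribou"),
    ("AA:BB:CC:00:11:22", "HomeNet"),
    ("00:10:FA:49:6B:E9", "Starbucks"),
]

def ssid_history(captures):
    """Group the SSIDs each device has previously joined, by MAC address."""
    history = defaultdict(set)
    for mac, ssid in captures:
        history[mac].add(ssid)
    return dict(history)

history = ssid_history(probe_requests)
print(sorted(history["00:10:FA:49:6B:E9"]))  # the device's network past, leaked
```

A few lines of bookkeeping turn background radio chatter into a per-device location history.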

Lucky for 00:10:FA:49:6B:E9, it found a seat in a Starbucks and decided to order that Father's Day gift it had been procrastinating on. Who doesn't love Amazon and free shipping with an Amazon Prime membership? This time, however, you aren't known as 00:10:FA:49:6B:E9; Amazon knows you as the first public IP address between you and the webserver. In this case, that is the public IP address of the particular Starbucks where you are sipping your Grande Americano.

Coupled with your IP address, Amazon also knows you by your user-agent. A user-agent is a string of data that tells a webserver what tools the client has available to render information to the user, so a developer can send a different version of a website to Internet Explorer than to Firefox. In addition to your IP address, you are also sending Amazon:

Mozilla/5.0 (Linux; U; Android 2.3.3; en-au; GT-I9100 Build/GINGERBREAD) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1 (2011-10-16 20:22:55)

Maybe you knew your browser coughed up that much data about itself, or maybe not. Either way, Amazon now knows that you are on a mobile device running Safari as your browser while you sit in Starbucks. Analysis of this data may or may not support that this IP address likes coffee.
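
Pulling structured facts out of that string takes only a regular expression or two. A sketch over the example user-agent above (timestamp removed); the patterns cover just this one string, and production sites would use a maintained user-agent parser library:

```python
import re

# The example user-agent from above, without the timestamp. The regexes
# below cover only this string; real sites use maintained UA parsers.
ua = ("Mozilla/5.0 (Linux; U; Android 2.3.3; en-au; GT-I9100 Build/GINGERBREAD) "
      "AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1")

def profile_from_ua(user_agent: str) -> dict:
    """Extract the coarse facts a server learns from a user-agent header."""
    os_match = re.search(r"Android [\d.]+", user_agent)
    device_match = re.search(r"; ([A-Z]{2}-[A-Z0-9]+) Build", user_agent)
    return {
        "mobile": "Mobile" in user_agent,
        "os": os_match.group(0) if os_match else "unknown",
        "device": device_match.group(1) if device_match else "unknown",
        "locale": "en-au" if "en-au" in user_agent else "unknown",
    }

print(profile_from_ua(ua))
```

Four fields from one header: phone model, OS version, language region, and the fact that you are on a mobile device.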

So now, as far as Amazon is concerned, you are just some random public IP address and web browser combo. Wrong. You are the entity that looked at that weird spatula/tong grilling tool last week for your Dad but never completed the transaction. Your cookies betrayed you. No, not the delicious chocolate chips in the Starbucks display case; it's those text-based cookies in your web browser, laden with the items you dropped in your online shopping cart, some tracking data, and any unique identifiers like your username. Cookies started out as helper files used to customize the user experience between the web browser and the server, but over time companies started "surreptitiously planting their cookies and then retrieving them in such a way that allows them to build detailed profiles of your interests, spending habits, and lifestyle" [4].
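
The tracking cookie itself is just a few key-value pairs, and Python's standard library can show what a server reads back the moment your browser reconnects. The cookie names and values here are invented, not any real site's schema:

```python
from http.cookies import SimpleCookie

# An invented Cookie header like one a browser might replay to a shop;
# the names and values are illustrative, not a real site's cookie schema.
raw_header = 'session_id=abc123; cart="spatula-tong-grill-tool"; visitor_id=34980292'

cookie = SimpleCookie()
cookie.load(raw_header)

# What the server learns the instant the browser reconnects:
profile = {name: morsel.value for name, morsel in cookie.items()}
print(profile["visitor_id"])  # a durable unique identifier
print(profile["cart"])        # what you almost bought
```

That `visitor_id` survives across visits, which is exactly how "the rug follows you around the web" works.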

Now that you understand that it isn't a coincidence that the rug you almost bought just happened to be advertised on the next seven websites you visited, let's take a look at how far companies are willing to go to monetize the relationship they develop with each customer. Fancy Wal-Mart, otherwise known as Target, uses a Guest ID number as the ordinal in each shopper-merchant transaction. This means that your rewards card, credit card numbers, and any other unique data are linked back to the Guest ID. Over time, combined with state-of-the-art predictive analysis, Target is able to focus its personal marketing campaign at a guest with frightening accuracy. As reported in the New York Times, a key goal is to identify and aggressively market to expectant mothers as they enter the second trimester, just prior to nesting and the onslaught of baby-related purchases [5].
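
The Guest ID trick is plain record linkage: every payment instrument maps to one ID, so every purchase collapses into a single profile. A toy sketch; the identifiers and purchases are invented, with the basket echoing the pregnancy-prediction example from the Times story:

```python
from collections import defaultdict

# Invented examples of how separate identifiers all resolve to one
# Guest ID, letting every purchase land in a single profile.
identifier_to_guest = {
    "rewards:4417-0021": 34980292,
    "visa:****1881": 34980292,
    "email:dad@example.com": 34980292,
}

purchases = [
    ("visa:****1881", "prenatal vitamins"),
    ("rewards:4417-0021", "unscented lotion"),
    ("email:dad@example.com", "cotton balls"),
]

def build_profiles(transactions):
    """Collapse transactions keyed by any identifier into per-guest profiles."""
    profiles = defaultdict(list)
    for identifier, item in transactions:
        profiles[identifier_to_guest[identifier]].append(item)
    return dict(profiles)

profile = build_profiles(purchases)
print(profile[34980292])  # the basket the predictive model actually sees
```

Pay three different ways and you are still one row in the model; that is the whole point of the Guest ID.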

Modern behavioral research is focused on the minutiae of who you are; specifically, that you are the sum of your habits. According to Duke University, habits, not conscious thought, account for 45% of the decisions you make each day [5]. So let's put the pieces together. Businesses have cracked the code on collecting the data that you both intentionally and inadvertently share with them. They know that 45% of your decisions are based on habit and that you spend about $1.94M over your lifetime. They know enough about you to predict your habits, travels, wants, and needs. I can't say with a straight face that companies keep lists of your MAC addresses or mine your Probe Requests, yet. But it's not a reach to see that if they did, there would be value to glean from the data. My goal was to facilitate a small discussion on the kind of data you litter throughout your day and the reasons businesses may be interested in collecting and analyzing it. So I will close with two questions: how much is the sum of your parts worth, and are you paranoid?


[1] BBC News. (2005, April 26). People 'spend £1.5m' during lives. Retrieved June 15, 2012, from BBC News.

[2] ComputerWorld & SAS (MarketWave). (2009). Defining Business Analytics and Its Impact On Organizational Decision-Making. Retrieved June 15, 2012, from: …/sas_defining_business_analytics_wp.pdf

[3] Wuergler, Mark. (2012, March 5). Secrets in Your Pocket. Retrieved June 15, 2012, from Prezi.

[4] Cookie Central. The Cookie Concept. Retrieved June 15, 2012, from Cookie Central.

[5] Duhigg, Charles. (2012, Feb 16). How Companies Learn Your Secrets. Retrieved June 15, 2012.

Administrative controls are as important as any other type of control.

19 06 2012

In our day, digital information is transmitted and processed by the ton and is one of the most important assets of every company, and information security has developed a huge variety of controls that try to protect it.

I believe that even the most complex system, without policies, guidelines, and training, will not prevent information leaks. These administrative controls are as important as any other control device implemented to protect information assets.

The principle of easiest penetration, according to Charles and Shari Pfleeger in Security in Computing, states that any system is most vulnerable at its weakest point [1]. I agree with this principle, and I also believe that one of the weakest points of every system is people.

There have been several stories in which people working for a company make an error, intentionally or unintentionally, and leak information to another organization that makes it public, causing losses to the business or to the company's reputation. In other cases, workers who end their relationship with a company also leak information. One example came to mind because it was a trending topic in the news a few days ago; here is the story:

Mexico is currently in election season; we will be choosing a new president. In the run-up to the election, the presidential candidates present their proposals to the people, but the media also carry stories that say unflattering things about them, trying to change voters' minds. This is the case with one candidate who has been accused of having contracts with Televisa (one of the most important TV companies in Mexico) to fabricate a good image for him through favorable coverage. If true, this is of course illegal, because the contracts were made while the candidate was serving as governor. A few days ago, The Guardian [2] published a story revealing that US diplomats were concerned about the supposed relationship between Televisa and Peña Nieto (the candidate) during 2009. According to The Guardian [3], they had access to documents in which US diplomats reported that the now presidential candidate was paying for favorable TV coverage while he was governor of the State of Mexico in 2009. The Guardian, according to the note posted on its website, had access to fee outlines, a detailed media strategy, and payment arrangements. The most important part is that, according to the note, The Guardian obtained those documents, in the form of Excel spreadsheets and PowerPoint files, from a source who worked with Televisa.

I bring up this example to support my point that people are one of the most important weaknesses of a security system. It is therefore very important to implement administrative controls such as policies, standards, procedures, and guidelines. All users of information have the responsibility to understand the importance of information security and their role in protecting company assets.

You can design a very large and complex security system, but in the end, it is people who generate and modify information and who hold the privileges to read or change it. It is therefore important to invest in training, to make people aware of the importance of the information and of the damage they can cause if it is leaked. People's commitment to the policy, as well as their integrity, will be among your best controls against information theft.


1. Pfleeger, Charles P., and Shari L. Pfleeger. Security in Computing, 4th Edition. Upper Saddle River, NJ: Prentice Hall, 2008. Kindle Edition.



Survival of the fittest: When it comes to protecting its information, can an organization learn from nature?

14 06 2012

At first glance, this may seem like a very provocative question.  Perhaps a question better suited for waxing philosophical at your local coffee house, as opposed to the boardrooms of Fortune 500 companies or the United States Government.  However, recent interdisciplinary studies have shown that complex organizations can learn a thing or two from even the simplest creatures.

A recent book by Dr. Raphael Sagarin and Terrence Taylor, Natural Security: A Darwinian Approach to a Dangerous World, posits precisely that: with outside-the-norm thinking and execution, organizations can benefit from the survival skills that species have adapted over time.[1]

An argument central to the authors’ theme is that risk cannot be eliminated; rather, risk can only be minimized.[2]  One example is how the United States Transportation Security Administration (TSA) could adapt behavior from marmots, when it comes to recognizing and communicating potential threats.  To paraphrase the author:

"The Government Accountability Office confirmed what most Americans already suspected: the (TSA) cannot possibly control all potential threats to airport security. Biological organisms inherently understand this. They realize they can't eliminate all risk in their environment. They have to identify and respond to only the most serious threats, or they end up wasting their resources and, ultimately, failing the evolutionary game. These models suggest that the TSA would be more effective by being much more selective in whom it considers for screening, rather than trying to eliminate all risks posed by liquids. A biological assessment of the TSA's methods also found that the agency's well-advertised screening procedures may lead to a kind of natural adaptation by terrorists. A study of animal behavior suggests that advertising your security procedures and continually conveying to others that there is a state of elevated threat only helps inform potential terrorists of loopholes in the procedures, while keeping the general population uncertain and nervous. Species such as marmots, which continually emit warning calls to each other even when no immediate threat is present, force the other animals in their group to waste time and energy trying to figure out if the implied threat is real, he noted."[3]

So while this example suggests that a reliance on individual, natural instincts may aid in recognizing threats, translating this methodology to complex organizations would most certainly require a careful, yet flexible, approach.

A potential approach, also described in an article by Sagarin, takes natural observations, such as forming good relationships and continuous adaptation, and applies them as a framework for protecting against terrorist attacks.[4]  But can these observations be applied to protecting a corporation’s information?

As a person who has worked in the Information Technology field for upwards of 15 years, all of it in the process-heavy and bureaucratic waters of the automotive manufacturing industry, my initial response would have been a succinct “no, not a chance.”  However, after digging into the articles and research, I have to say the idea is thought-provoking at the very least.  Of course, any change would certainly not be easy.  Reflecting on my own experiences of day-to-day operations over the years brings to mind politically charged and contentious relationships, as well as slow or purely reactionary response mechanisms.  When I think of IT security, a favorite Dilbert cartoon comes to mind:[5]


While this cartoon represents (hopefully) an extreme, albeit comical, example of how organizations may react to security vulnerabilities, what may be more telling is the last caption: the problem/risk has been “fixed.”  And that may be the fundamental trap: believing that risks can be eliminated (fixed) rather than focusing on minimizing them.

Taking the above-mentioned framework, with its baseline of minimizing risk, let’s analyze how it may apply to information security in a major corporation.

Forming Good Relationships

The idea of forming good relationships can be applied in several ways to help secure information.  When I think about the information security teams I have dealt with in the past, a precise hierarchical structure, typically managed by the Information Technology organization, comes to mind.  This may seem plainly obvious to most, since the proliferation of computing across most businesses has intensified the need to secure a corporation’s data, now scattered across a wide array of platforms ranging from data centers to mobile devices.  But does this function have to be a pure IT responsibility?

While an IT security team is typically staffed by knowledgeable and competent people who are trained to deal with computing vulnerabilities and threats, they certainly can’t be the only people in the organization fit to protect the company’s information assets.

Someone in the research and development department may have insight into upcoming product lines that should be evaluated for possible threats, or a person in the marketing department may have a sales campaign idea that could expose vital corporate information.  Each person may bring a unique perspective that can be shared across the organization with respect to protecting information.  And this information sharing should be viewed as a complementary function rather than a hindrance to progress; speedy delivery to market should be balanced against minimizing risk.

So instead of isolating the dialogue and analysis solely within the IT security team, opening up the lines of communication may broaden the overall organizational understanding of identifying and minimizing risks.

Continuous Adaptation

Another area of consideration is the premise of continuously adapting over time.  Many organizations do a lot of work to implement standalone security processes or mix security into product development cycles: access control audits, security checklists, and so on.  Lots of effort is put forth and policies are driven down by leadership with strict expectations for adherence.  This is good work and is certainly not to be discounted.  However, the creation of these policies can sometimes be viewed as a discrete deliverable, meaning that once the policy is in place, the effort is focused on execution rather than on constantly evaluating whether the policy remains relevant.

Perhaps the constant evaluation of policies should weigh just as heavily as their initial creation.  Of course, not only does this take a shift in thinking, but it also has a cost in terms of effort and resources.  Additionally, when an attack occurs it is appropriate to respond and make sure the vulnerability is patched.

Making the shift from reactively responding to attacks to proactively identifying risks and vulnerabilities should be the ideal state.  And although this effort may be difficult to measure against traditional metrics, such as ROI, the value of this living activity should not be diminished.


So while some techniques may not work for all (I’m not suggesting we all pick up the latest journal on marmot behavior), there are clearly some things that can be learned, as in the TSA example.  The fundamental message is that we must learn to live with risk and focus our efforts on minimizing it, as opposed to chasing the belief that risk can be eliminated.  And the effort to minimize risk may succeed if organizations are willing to tap into their diverse pools of resources to continuously identify and adapt to threats and vulnerabilities.

[1] Sagarin, R., & Taylor, T. (2008). Natural Security: A Darwinian Approach to a Dangerous World. University of California Press.

[2] GoogleTechTalks. (2008, May 29). GoogleTechTalks: Natural Security (A Darwinian Approach to a Dangerous World). Retrieved June 9, 2012, from YouTube:

[3] Duke University. (2008, January 28). Lessons From Evolution Applied To National Security And Other Threats. Retrieved June 9, 2012, from ScienceDaily:

[4] Sagarin, R. (2003). Adapt or Die. Retrieved June 9, 2012, from UCLA:

[5] Adams, S. (2004, January 11). Dilbert. Retrieved June 9, 2012, from Dilbert:

The Perils of a Virtual Data Center

13 06 2012

There is little debate that virtualization technology has been one of the key drivers for innovation in the enterprise data center. By abstracting services from hardware into manageable containers, virtualization technology has forced IT engineers to think outside of the box, allowing them to design systems that actually increase service availability while decreasing the physical footprint and cost. And these innovations haven’t stopped with advanced hypervisors or hardware chipset integrations. Innovations in the virtualization space have forced the whole of the IT industry into a new paradigm, bringing along a cascade of innovations in networking, storage, configuration management and application deployment. In short, virtualization ushered a whole new ecosystem into the data center.

The only problem is that the security folks seem to have been left out of this ecosystem. According to the Gartner Group, 60% of all virtualized servers will be less secure than the physical servers they’ve replaced.[i] The reason, Gartner claims, is not that virtualization technologies are “inherently insecure”, but instead, virtualization is being deployed insecurely. Gartner goes on to enumerate risks in virtual infrastructure that boil down to a lack of distinction between the management and data planes and a dearth of tools to correctly monitor the virtual infrastructure. This observation, when extrapolated to include this new ecosystem of data center technologies layered above or below the hypervisor, illustrates exactly why the security community needs to become more involved in the actual process of building and designing data center architectures.

Flexible Topologies vs. Stringent Security

Security and network engineers have long relied on strictly controlled, hierarchical data center topologies where systems are architected to behave in a static fashion. In these environments, data flows can be understood by “following the wires” to and from nodes. Each of these nodes fulfils discrete functions that have well-documented methods for securing data and ensuring high availability. But once virtualization has been introduced into this environment, the structure and the assurances can disappear. Virtual web servers can in one instant intermingle with their backend database servers on a single hypervisor, and data flows between these virtual machines no longer have to pass through firewalls. In another instant, the same database servers could travel across the data center, causing massive amounts of east-west traffic, thereby disrupting the heuristics on the intrusion prevention and performance monitoring platforms.

Adding further confusion to these new topologies, new “fabric” technologies have been released that bypass the rigidly hierarchical Spanning Tree Protocol (STP).  STP ensured that data from one node to another followed a specifically defined path through the network core, which made monitoring traffic for troubleshooting or intrusion prevention simple. These fabrics (such as Juniper QFabric, Brocade VCS, and Cisco FabricPath) now allow top of rack switches to be configured in a full mesh, Clos or hypercube topology[ii] with all paths available for data transmission and return. This means that if an engineer wants to monitor traffic to or from a particular host, they will have to determine the current path of the traffic and either engineer a way to make this path deterministic (thereby defeating the purpose of the fabric technology) or hope that the path doesn’t change while they are monitoring.
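To see why path determinism disappears, consider how a fabric typically chooses among equal-cost paths: a hash of the flow tuple. The sketch below is illustrative only (real switch ASICs use their own hash functions and inputs); it shows that the chosen path depends both on the flow and on the current path set, so any topology change can silently move the traffic an engineer is watching.

```python
import hashlib

def ecmp_path(src_ip, dst_ip, src_port, dst_port, paths):
    """Pick one of several equal-cost paths by hashing the flow tuple,
    roughly as fabric/ECMP switches do: deterministic per flow, but
    re-shuffled whenever the set of available paths changes."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return paths[digest % len(paths)]

paths = ["spine-1", "spine-2", "spine-3", "spine-4"]
p1 = ecmp_path("10.0.1.5", "10.0.2.9", 49152, 443, paths)
# The same flow may land on a different spine once a link is added:
p2 = ecmp_path("10.0.1.5", "10.0.2.9", 49152, 443, paths + ["spine-5"])
print(p1, p2)
```

An engineer who taps spine-1 to capture this flow has no guarantee the flow is still there after the next topology change.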

The flexibility afforded by virtualization can cost dearly in security best practices. For instance, a typical best practice is to “prune” VLANs: removing these layer 2 networks from unnecessary switches in order to prevent unauthorized monitoring, man-in-the-middle attacks, or network disruptions. But in the virtualized data center, this practice has become obsolete. Consider the act of moving a virtual server from one hypervisor to another. To accomplish this task, the VLAN that the virtual machine lives on must exist as a network on an 802.1Q trunk attached to both servers. If each of these servers is configured to handle any type of virtual machine within the data center, all VLANs must exist on this trunk, and on all of the intermediary switches, producing substantial opportunities for security and technical failures, particularly in multitenant environments.
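A pruning audit of this sort is simple to sketch; the trunk names and VLAN inventories below are hypothetical, and a real audit would pull them from switch configurations.

```python
def unpruned_vlans(trunk_allowed, vlans_needed):
    """Report VLANs carried on each trunk that no attached workload
    actually needs -- candidates for pruning."""
    report = {}
    for trunk, allowed in trunk_allowed.items():
        extra = set(allowed) - set(vlans_needed.get(trunk, ()))
        if extra:
            report[trunk] = sorted(extra)
    return report

# Hypothetical inventory: every trunk carries all 100 VLANs so that any
# VM can migrate to either host, although only a few are in use.
trunks = {"esx01-uplink": range(1, 101), "esx02-uplink": range(1, 101)}
needed = {"esx01-uplink": {10, 20}, "esx02-uplink": {10, 30}}
audit = unpruned_vlans(trunks, needed)
print(len(audit["esx01-uplink"]))  # 98 VLANs that could be pruned
```

The catch, as the paragraph above notes, is that “needed” must include every VLAN a virtual machine might migrate onto, which in a flexible environment ends up being all of them.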

Prior to the introduction of data center fabrics, most network engineers segregated different types of traffic onto different layer 2 domains, forcing all inter-VLAN communication to ingress and egress through distribution layer routers. However, even this most basic of controls can be purposefully defeated with new technologies like VXLAN[iii] and NVGRE[iv]. These protocols allow hypervisor administrators to stretch a layer 2 domain between hypervisors that are separated by layer 3 boundaries, simply by encapsulating the original layer 2 frames inside routable layer 3 packets (UDP datagrams for VXLAN, GRE packets for NVGRE) before handing them to the layer 3 device. This obviates the security controls that the network provided and could even allow a VLAN to be easily extended outside of a corporate network perimeter. This possibility illustrates yet another risk in virtualization technologies: separation of duties.
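The encapsulation itself is tiny. This sketch builds the 8-byte VXLAN header described in the IETF draft (an “I” flag plus a 24-bit network identifier); the original Ethernet frame is appended after it and the result travels in an ordinary UDP/IP packet, which is exactly why it sails through layer 3 boundaries.

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header per the IETF draft: a flags word
    with the I bit set (valid VNI), then the 24-bit VXLAN Network
    Identifier in the upper bits of the second word."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI is a 24-bit field")
    flags = 0x08000000  # I flag; all other bits reserved
    return struct.pack("!II", flags, vni << 8)

hdr = vxlan_header(5000)
# hdr + original Ethernet frame then rides inside a plain UDP datagram.
print(len(hdr), hdr.hex())
```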

Management and Monitoring

In the traditional data center, the separation of duties was easy to understand and segment. Network, systems, and storage engineers all worked together with minimal administrative overlap while security engineers formed a sort of protective membrane to ensure that this system stayed healthy. Yet in the virtual realm, all of these components intermingle and they can be logically managed from within the same few management interfaces. These interfaces allow systems engineers to make changes to networking or storage infrastructure, and network engineers to make changes to hypervisors or virtual machines, and so forth. As networks supporting virtual infrastructures have converged, storage management policies have blurred into the network management realm as switches and routers increasingly transport iSCSI or FCoE traffic in addition to traditional data or voice packets.

None of this would matter quite as much if these new technologies were easy to monitor. But without topographical consistency or separate management domains, monitoring becomes another interesting challenge. The old-fashioned data center typically relied upon at least three levels of monitoring: NetFlow for records of node-to-node data communications, SNMP, WMI or other heuristics-based analysis of hardware and software performance, and some type of centralized system and application logging mechanism such as Syslog or Windows Event Logs.

But in the virtualized data center, the whole doesn’t add up to the sum of its parts. While system logging remains unchanged, it stands alone as the only reliable way of monitoring system health; NetFlow and SNMP are crippled. A few years ago, NetFlow in virtualized environments was completely absent: VM-to-VM traffic that never left a hypervisor simply didn’t create NetFlow records at all. Responding to the issue, most vendors added some amount of NetFlow accounting[v] at a price premium. However, the version implemented (v5) still does not support IPv6, MPLS, or VPLS flows. Furthermore, since PCI buses and PC motherboards are not designed for wire-speed switching, there are reports that enabling NetFlow on hypervisor virtual switches can result in serious performance problems.[vi]
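The v5 limitation is visible right in the record layout: every address field is a fixed 32 bits, so IPv6, MPLS, or VPLS flows simply have nowhere to go. Below is a minimal parser for one 48-byte v5 flow record; the field selection is mine, but the layout follows the standard v5 export format.

```python
import socket, struct

# NetFlow v5 flow-record layout: fixed 32-bit address fields only.
V5_RECORD = struct.Struct("!4s4s4sHHIIIIHHBBBBHHBBH")

def parse_v5_record(data):
    """Decode one 48-byte NetFlow v5 record into a small dict (sketch)."""
    f = V5_RECORD.unpack(data)
    return {
        "src": socket.inet_ntoa(f[0]),      # IPv4 only, by design
        "dst": socket.inet_ntoa(f[1]),
        "packets": f[5],
        "octets": f[6],
        "src_port": f[9],
        "dst_port": f[10],
        "proto": f[13],
    }

# A synthetic record: two addresses followed by zeroed counters/fields.
raw = socket.inet_aton("10.0.0.1") + socket.inet_aton("10.0.0.2") + bytes(40)
print(parse_v5_record(raw)["src"], V5_RECORD.size)
```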

However, even if NetFlow is successfully enabled in the virtual environment, there are still a few ways it can be broken. Once a VXLAN or NVGRE tunnel is established between two hypervisors, the data can be encrypted using SSL or IPsec. These flows will only be seen as layer 3 communications between the two hypervisors, even if in reality a dozen or more machines are speaking to an equivalent number on the remote hypervisor. The NVGRE/VXLAN problem, combined with the new fabric architectures and the dynamic properties of virtual machines, means that heuristic analysis of virtual data center performance is much less feasible than in static data centers. In a static environment, security engineers can set thresholds for “typical” amounts of data transfer between various nodes. Once system administrators have the capability to dynamically distribute virtual machines across a data center, those numbers become meaningless, at least until a new baseline for traffic analysis is established.
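A toy threshold detector makes the baseline problem concrete; the host pairs and byte counts are invented. The moment a VM migrates, its traffic shows up on a brand-new pair with no history, and the old baseline becomes dead weight.

```python
from statistics import mean, stdev

def flag_anomalies(history, current, k=3.0):
    """Flag node pairs whose current byte count exceeds mean + k*stddev
    of their history -- the static baseline that VM mobility invalidates."""
    alerts = []
    for pair, samples in history.items():
        mu, sigma = mean(samples), stdev(samples)
        if current.get(pair, 0) > mu + k * sigma:
            alerts.append(pair)
    return alerts

history = {("web01", "db01"): [100, 110, 95, 105, 90]}  # MB per interval
print(flag_anomalies(history, {("web01", "db01"): 100}))   # normal
print(flag_anomalies(history, {("web01", "db01"): 5000}))  # anomalous
```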

A (Fabric)Path forward

So where does this leave the security engineers who’d like to at least try to keep a handle on the security of their most critical systems in a virtualized data center? Well, first, they can take comfort in the fact that although these technologies are on the horizon, few companies have moved beyond the most basic of virtualization infrastructures. VXLAN and NVGRE are still IETF drafts, which means that even though Cisco, Microsoft and VMware already support them, the standards have not yet been widely adopted by other vendors. And even where equipment from these vendors is already in a data center, it’s likely that VXLAN or NVGRE is unnecessary for most organizations.[vii] Similarly, the new data center fabric architectures haven’t yet seen wide adoption, because they cost enormous amounts of money[viii] and require a massive data center overhaul, from equipment replacement to new fiber plants.[ix]

Also, despite the numerous security gaps created by new virtualization technologies, data center equipment vendors are aware of the problems and are engaged in finding solutions. Cisco released a bolt-on virtual switch[x] that can be used to separate the management of networking equipment from the virtual systems environment while also terminating VXLAN connections to allow for security monitoring of VXLAN tunnels. A number of vendors have introduced virtual firewall appliances[xi] that can provide continuous protection even when virtual machines are moved out of the path of a physical firewall. Even SNMP/WMI monitoring gaps are being bridged by vendors who have developed virtualization-aware technologies[xii] that detect virtual machine locations and smooth baseline heuristics once a machine has migrated to another location.

So, all hope is not lost.  Depending on where the architecture team is in their research or implementation of these technologies, security engineers are likely to have an opportunity to get a seat at the table, and they will have a burgeoning security toolkit at their disposal to help them get a hold of the process before it gets out of hand.

The State of Information Security and Data Today

11 06 2012

If people, processes, and technology are what drive business activity, how do we apply information security to them? In IT security terminology it is common to refer to the ‘CIA Triad’1 as a way to determine how a security policy, process, or technology benefits the enterprise.  The CIA Triad is composed of Confidentiality, Integrity, and Availability; in simple terms, any security control or process improves one or more of these key components.  Confidentiality deals with keeping information secret (e.g., encryption), Integrity means ensuring that transacted data is accurate and has not been altered (e.g., digital signatures), and Availability means ensuring that systems and data are accessible when they are needed (e.g., redundancy and failover).
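As a small illustration of the integrity leg of the triad, a keyed hash (HMAC) lets a receiver detect any alteration of a message in transit; the key handling below is deliberately oversimplified.

```python
import hmac, hashlib

key = b"shared-secret"  # illustrative only; real keys need proper management

def sign(message: bytes) -> bytes:
    """Produce an integrity tag for the message under the shared key."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Constant-time check that the message was not tampered with."""
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"transfer $100 to account 42")
print(verify(b"transfer $100 to account 42", tag))  # intact message
print(verify(b"transfer $900 to account 42", tag))  # tampered message
```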

As the enterprise IT infrastructure in most organizations has matured, we’ve learned how to manage and maintain highly-available networks that support our business applications, users, and the data that they exchange.  Before we can talk about securing data in addition to making it highly available, we need to understand where our data is, and where it is going.

The world population as of 2010 was estimated to be 6,869,700,000; as of October 2011, that number has surpassed 7 billion people,2 compared with 1970, when only 3.7 billion people existed. The population of Earth has almost doubled in 40 years. In today’s highly connected world, people use and create data daily at home, on the go, and at work.  It is estimated that at any given time there are 1.97 billion people actively using the Internet, about 28.7% penetration of the general population, an increase of 445% over measurements made in the year 2000.3,4
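These figures can be sanity-checked with a few lines of arithmetic; note that the roughly 361 million Internet users in 2000 is my own assumption based on commonly cited estimates, not a figure from the sources above.

```python
# Quick arithmetic check of the cited figures.
users_2010 = 1.97e9     # active Internet users (from the text)
population = 6.8697e9   # 2010 world population (from the text)
users_2000 = 361e6      # assumed estimate for the year 2000

penetration = users_2010 / population * 100          # percent of population
growth = (users_2010 - users_2000) / users_2000 * 100  # percent growth
print(f"penetration ~ {penetration:.1f}%, growth since 2000 ~ {growth:.0f}%")
```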

Today’s people are connected and utilizing voice and data networks that reach almost every part of the world. They are able to transmit and receive rich data content via devices that use numerous technologies. (e.g., Fiber Optics, DWDM, Wireless, Broadband, DSL, Leased Lines, Ethernet, 3G-4G-LTE Cellular, WiFi, Satellite, Smart Grids, Peer-to-Peer, etc.)

The number of mobile phone subscriptions worldwide surpassed five billion in 2011.5 These mobile devices are becoming more like computers, allowing people to create, view, and modify data. Additionally, people tend to have multiple devices: desktops, laptops, tablets, set-top boxes, and mobile devices such as smart phones, each of which lets them create, view, and modify data. What’s more, some devices in the home and in the workplace are creating and modifying data without any user interaction; examples include ‘Smart Grid’ enabled appliances, security and monitoring systems, GPS systems, game consoles, and even vehicles.

All of this data that is being created requires some form of storage, and luckily the size, capacity, and cost of storage keep decreasing.  For example, a Micro SD card smaller than a fingernail can currently hold up to 128 gigabytes of data.6  Industry experts expect this form factor to reach 2 terabytes in the near future.  In 2010, a Western Digital hard drive with a capacity of 1 terabyte cost approximately $72, or about $0.07 per gigabyte; in 1980, a Morrow Designs hard drive with a capacity of 26 megabytes cost approximately $5,000, or $193 per megabyte ($193,000 per gigabyte).  This means that (in a trend similar to Moore’s Law for computing power) storage capacity per dollar has compounded at roughly 60-65% per year, doubling about every year and a half.  All of this data is being stored in multiple places: ‘cloud’ storage, DropBox, USB thumb drives, SD cards in smart phones, gaming systems, flash memory, portable hard drives, media devices, backup tapes, optical media, persistent email systems like Google’s Gmail, offline storage mechanisms, automobiles, and so on.
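Running the two price points through the compound-growth formula shows where the annual-rate figure comes from:

```python
# Implied improvement in storage cost, from the two data points above:
# 1980: ~$193,000 per GB; 2010: ~$72 per TB (~$0.072 per GB).
cost_1980_per_gb = 193_000.0
cost_2010_per_gb = 72 / 1000
years = 30

factor = cost_1980_per_gb / cost_2010_per_gb  # total improvement, ~2.7 million x
annual = factor ** (1 / years) - 1            # compound annual rate, ~64%
print(f"{factor:,.0f}x overall, about {annual:.0%} per year")
```

At roughly 64% per year, capacity per dollar doubles about every year and a half.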

We know that people need to create, view, and modify data in order to do work.  Traditional tools for doing this are typewriters, fax machines, word processors, databases, spreadsheets, and customized software applications.  For the most part, when data was created in the traditional home or workplace it was easier to control: data was saved to a database in one location, a form was printed and filed in one location, and a single copy of a document would be created and exchanged via fax machine with the intended recipient.  The culture of how people use data has changed, and new technologies are making it easy for them to collaborate and share: email, instant messaging, Facebook, Google+, LinkedIn, Skype, peer-to-peer file sharing, BitTorrent, blogs, Twitter, SMS, and an ever-increasing array of mobile applications. People like to share data.

The data that we create at work and at home is growing at an alarming rate. “Every two days we create as much information as we did from the dawn of civilization up until 2003. That’s something like five exabytes of data.” – Eric Schmidt, Google, Inc. CEO, August 10th, 2010. The simple fact is that once data is created and then shared, it is hard, if not impossible to control.

Fortunately, the bulk of the data that your organization creates is benign; the data is the ‘get-it-done’ type of conversational day-to-day work.  This data is email relating to a late project, it’s an instant message to a peer regarding when he/she will be available for lunch, and it’s a spreadsheet of data that is important to a project team to measure quality but meaningless to anyone else who would look at it out of context.  This get-it-done data turns out to be ~85% of an organization’s data; it is unclassified data.  (The caveat to this is that if context can be determined by collecting this get-it-done data over a long period of time, and making sense of it as some attackers/threats may be capable of doing, this data can be used maliciously.)

The next type of data that organizations generate relates to finances and to people (personally identifiable information, or PII).  To simplify, we’ll refer to this as ‘regulatory’ data, since existing laws, and new laws being created, govern how it must be treated. If your business handles credit card data, it needs to be certified annually under the payment card industry’s requirements, or your business will pay penalties and suffer damage in the press. If one of your business applications handles invoices and inventory worth more than a billion dollars in a given year, you’ll have to comply with Sarbanes-Oxley regulatory requirements.  If your company maintains databases of employees or customers and happens to lose track of a back-up tape, or if a breach occurs where this information was accessible for any amount of time, you’ll need to comply with public disclosure of the incident.  These requirements surfaced beginning in the early 2000s and have largely been dealt with: the applications that handle this type of data have controls in place that restrict access, encrypt the data, and report on who has accessed it at any given time.  Problems still occur, and penalties from fines, brand damage from public disclosure, and lower stock performance due to disclosure to the financial markets are things that businesses have to deal with. For the most part, though, businesses have adapted over the past decade by putting the appropriate controls in place and building strong audit and compliance programs to maintain both the security of this data and the required regulatory transparency.

The type of data that needs the most attention is sensitive data.  This data is what should be considered secret, classified as such, and accessible only by individuals within the organization who have a need to know.  It involves the organization’s competitive advantage: financial planning documents, strategic planning of resources, expensive research documentation and design specifications, special ‘recipes’ for how something is constructed, and so on. This type of data has largely been entrusted to people to manage, and most people in the organization assume it is safe because “it must be safe!”  Or at least that is what people with need-to-know access believe when interviewed about how strong their application controls are (the controls that restrict access to the systems that manipulate the secret data).  The truth is that the classification of secret data is subjective (unless you’re Coca-Cola, with one “secret sauce” recipe locked behind fifteen doors under a mountain whose location nobody knows); most people creating and using secret data don’t understand how to classify and control it, or don’t realize the ramifications of mishandling it.  Traditionally, the organizations that dealt with secret and top-secret information were the military and some government agencies.  They have the advantage of strong Role-Based Access Controls (RBAC), and all of their processes take into account the concepts of need-to-know and least privilege. Most organizations today don’t have the resources to effectively handle strict RBAC, the data is online and digital rather than in a vault, and the people entering the workplace expect to share information in order to do their work. Fortunately, this type of data ends up being less than ~5% of the average organization’s data.
If we can identify which applications process this type of data, we can make better decisions on how to secure and monitor the data and the people who access it.
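A toy classifier shows the idea of sorting content into the three buckets described above; the regular expressions and keyword list are illustrative assumptions, nowhere near a production data-loss-prevention ruleset.

```python
import re

# Illustrative patterns only: SSN-like and card-number-like strings mark
# 'regulatory' data; a few keywords mark 'sensitive' data.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")
SENSITIVE_WORDS = ("confidential", "strategic plan", "design spec", "recipe")

def classify(text: str) -> str:
    low = text.lower()
    if any(w in low for w in SENSITIVE_WORDS):
        return "sensitive"
    if SSN.search(text) or CARD.search(text):
        return "regulatory"
    return "get-it-done"

print(classify("lunch at noon?"))                     # get-it-done
print(classify("customer SSN 123-45-6789"))           # regulatory
print(classify("CONFIDENTIAL: FY13 strategic plan"))  # sensitive
```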

The state of information security today should be one of major concern for all businesses.  In the past we’ve seen viruses, spam, website defacement, and theft of credit card information.  All of these still occur today, though the IT industry has gotten better at dealing with them through anti-virus and anti-malware software, email spam filtering, and intrusion detection and prevention appliances (IDS/IPS).  What has changed is that it is now profitable for a ‘bad actor’ (a hacker or group of hackers working together) to steal your information and sell it for money.  There is a market for credit card information, personal information, social security numbers, and the like.  The range of ‘bad actors’ that put your organization at risk begins with traditional hackers who cause mayhem; then individual ‘black hat’ hackers who work for hire, or on their own, to penetrate your network and steal information, or to sell a door into your organization (a ‘0-day’: an exploit that has never been seen in the wild before); then networks of ‘hacktivist’ hackers who target organizations on a whim and can collaborate to cause greater damage (e.g., the Anonymous and LulzSec groups); and finally organized cyber-crime, nation-state espionage and theft of intellectual property, and cyber-terrorism.  These individuals and groups use a combination of methods to attack your company’s network and applications. Taking an aggressive security posture against the biggest ‘bad actors,’ such as nation-states, is the wrong approach, as these entities have far more money and resources than most organizations are willing to spend; individual governments are realizing this and building their own organizations to deal with this level of threat.  It is now common to hear the term ‘cyber-warfare’ in the media, and even some of our IT giants are starting to deal with it as well.
For example, Google has issued a press release describing that it will begin informing Gmail users when a particular message may originate from a state-sponsored campaign or attack.7 Microsoft is likewise in the news after one of its low-security certificates was abused so that the newly discovered state-sponsored malware known as ‘Flame’ could be trusted and installed through Microsoft’s own Windows Update mechanism.8

In order to protect data, businesses today need to identify where they are exposed to risk in their IT systems and infrastructure and make decisions on how to leverage new architectures and security controls to reduce the likelihood of a successful attack. (That is, don’t throw security devices and controls everywhere; think about relocating and grouping the applications that handle sensitive data together and concentrate the security investment there.) Businesses also need to invest in strong Security Information and Event Management (SIEM) processes and tools.  These tools, and the people with the skills to use them, are newly emerging and in high demand, and in many ways we are heading into uncharted waters with respect to how much data we have to keep track of.  It may be an option to leverage outsourced security to help improve your security program, build new operational processes, and train your existing security professionals; but that may be only part of the answer.  Knowing that we can never achieve 100% security or avoidance of risk, teaching your security professionals to think in terms of risk management every day is necessary.  Your security professionals should be working with the business to develop and document plans for what to do when an application or system and its data are compromised.
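To give a flavor of what a SIEM correlation rule does, here is a minimal sketch that aggregates authentication failures by source address; the event format and threshold are invented, and real SIEM platforms correlate far richer event streams.

```python
from collections import Counter

def brute_force_sources(events, threshold=5):
    """Return source IPs with at least `threshold` failed logins --
    a pattern no single log line reveals on its own.
    events: iterable of (source_ip, outcome) tuples from auth logs."""
    failures = Counter(ip for ip, outcome in events if outcome == "fail")
    return sorted(ip for ip, n in failures.items() if n >= threshold)

log = [("10.1.1.9", "fail")] * 6 + [("10.1.1.9", "ok"), ("10.2.2.2", "fail")]
print(brute_force_sources(log))  # only the repeat offender is flagged
```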


[1] The CIA Triad:

[2] United States Census Bureau – World Population Estimate:

[3] World Bank, World Development Indicators


[5] AP / U.N. Telecommunications Agency:

[6] Intel & Micron Joint Venture, 2011:

[7] Google starts warning users of state-sponsored computer attacks:

[8] Microsoft certificate used to sign Flame malware, issues warning: