The State of Information Security and Data Today

11 06 2012

If people, processes, and technology are what drive business activity, how do we apply Information Security to them? In IT security terminology it is common to refer to the ‘CIA Triad’1 as a way to determine how a security policy, process, or technology benefits the enterprise.  The CIA Triad is composed of Confidentiality, Integrity, and Availability.  In simple terms, any security control or process should improve one or more of these key components.  Confidentiality deals with keeping information secret (e.g. encryption), Integrity with making sure that data is accurate and has not been altered (e.g. digital signatures), and Availability with ensuring that systems and data are accessible when they are needed (e.g. redundancy and failover).

As the enterprise IT infrastructure in most organizations has matured, we’ve learned how to manage and maintain highly-available networks that support our business applications, users, and the data that they exchange.  Before we can talk about securing data in addition to making it highly available, we need to understand where our data is, and where it is going.

The world population as of 2010 was estimated to be 6,869,700,000, and as of October 2011 it has surpassed 7 billion people,2 compared to 1970, when only 3.7 billion people existed. The population of Earth has almost doubled in 40 years. In today’s highly connected world, people use and create data daily at home, on the go, and at work.  It is estimated that at any given time there are 1.97 billion people actively using the Internet, about 28.7% penetration of the general population; that is an increase of 445% over measurements made in the year 2000.3,4
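
The penetration and growth figures above can be sanity-checked with a little arithmetic. In this sketch, the year-2000 user count (roughly 361 million, a commonly cited estimate) is my assumption; the other numbers come from the text.

```python
# Rough check of the Internet-penetration figures cited above.
world_pop_2010 = 6_869_700_000   # estimated world population, 2010
internet_users = 1_970_000_000   # estimated active Internet users
users_in_2000 = 361_000_000      # assumption: commonly cited estimate for 2000

penetration = internet_users / world_pop_2010 * 100
growth = (internet_users - users_in_2000) / users_in_2000 * 100

print(f"Penetration: {penetration:.1f}%")       # close to the 28.7% cited
print(f"Growth since 2000: {growth:.0f}%")      # close to the 445% cited
```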

Today’s people are connected and utilizing voice and data networks that reach almost every part of the world. They are able to transmit and receive rich data content via devices that use numerous technologies. (e.g., Fiber Optics, DWDM, Wireless, Broadband, DSL, Leased Lines, Ethernet, 3G-4G-LTE Cellular, WiFi, Satellite, Smart Grids, Peer-to-Peer, etc.)

The number of mobile phone subscriptions worldwide surpassed five billion in 2011.5 These mobile devices are becoming more like computers, allowing people to create, view, and modify data. Additionally, people tend to have multiple devices, such as desktops, laptops, tablets, set-top boxes, and smart phones, each of which lets them create, view, and modify data. What’s more, some devices in the home and in the workplace are creating and modifying data without any user interaction; examples include ‘Smart Grid’ enabled appliances, security and monitoring systems, GPS systems, game consoles, and even vehicles.

All of this data that is being created requires some form of storage, and fortunately the size and cost of storage keep decreasing while capacity increases.  For example, a Micro SD card smaller than a fingernail can currently hold up to 128 Gigabytes of data,6 and industry experts expect this form factor to reach 2 Terabytes in the near future.  In 2010, a Western Digital hard drive with a capacity of 1 terabyte cost approximately $72, or about $0.07 per gigabyte.  Compare that to 1980, when a Morrow Designs hard drive with a capacity of 26 megabytes cost approximately $5,000, or $193 per megabyte ($193,000 per gigabyte).  This means that (in a trend similar to Moore’s Law for computing power) storage capacity per dollar has roughly doubled every year and a half.  All of this data is being stored in multiple places: ‘Cloud’ storage, DropBox, USB thumb drives, SD cards in smart phones, gaming systems, flash memory, portable hard drives, media devices, backup tapes, optical media, persistent email systems like Google’s Gmail, offline storage mechanisms, automobiles, etc.
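
The cost trend can be worked out directly from the two price points cited above. This is a sketch using the approximate figures from the text, not authoritative market data:

```python
# Storage cost per gigabyte, 1980 vs. 2010, using the prices cited above.
import math

cost_1980_per_gb = 5000 / (26 / 1000)   # $5,000 for a 26 MB drive -> $/GB
cost_2010_per_gb = 72 / 1000            # $72 for a 1 TB drive -> $/GB

ratio = cost_1980_per_gb / cost_2010_per_gb
years = 2010 - 1980
halvings = math.log2(ratio)                      # times the cost/GB halved
doubling_time_months = years * 12 / halvings     # capacity-per-dollar doubling time

print(f"1980: ${cost_1980_per_gb:,.0f}/GB, 2010: ${cost_2010_per_gb:.2f}/GB")
print(f"Cost per GB halved roughly every {doubling_time_months:.0f} months")
```

Working from these figures, capacity per dollar doubles roughly every year and a half, a pace in the same spirit as Moore’s Law.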

We know that people need to create, view, and modify data in order to do work.  The traditional tools for doing this were typewriters, fax machines, word processors, databases, spreadsheets, and customized software applications.  For the most part, when data was created in the traditional home or workplace it was easier to control: data was saved to a database in one location, a form was printed and filed in one location, and a single copy of a document would be created and faxed to the intended recipient.  The culture of how people use data has changed, and new technologies make it easy for them to collaborate and share: email, instant messaging, Facebook, Google+, LinkedIn, Skype, peer-to-peer file sharing, BitTorrent, blogs, Twitter, SMS, and an ever increasing array of mobile applications. People like to share data.

The data that we create at work and at home is growing at an alarming rate. “Every two days we create as much information as we did from the dawn of civilization up until 2003. That’s something like five exabytes of data.” – Eric Schmidt, Google, Inc. CEO, August 10th, 2010. The simple fact is that once data is created and shared, it is hard, if not impossible, to control.

Fortunately, the bulk of the data that your organization creates is benign; it is the ‘get-it-done’ type of conversational, day-to-day work: an email about a late project, an instant message to a peer about when he or she will be available for lunch, a spreadsheet of metrics that is important to a project team but meaningless to anyone who would view it out of context.  This get-it-done data turns out to be roughly 85% of an organization’s data; it is unclassified data.  (The caveat is that if context can be reconstructed by collecting this get-it-done data over a long period of time and making sense of it, as some attackers may be capable of doing, this data can be used maliciously.)

The next type of data that organizations generate relates to finances and to people (personally identifiable information, or PII).  To simplify this concept we’ll refer to this as ‘regulatory’ data, as there are existing laws, and new laws being created, governing how this data must be treated. If your business handles credit card data, it needs to be certified annually under the payment card industry’s standards or it will face penalties and suffer damage in the press. If one of your business applications handles invoices and inventory greater than a billion dollars in a given year, you’ll have to comply with Sarbanes-Oxley regulatory requirements.  If your company maintains databases of employees or customers and loses track of where a back-up tape of the data went, or suffers a breach in which this information was accessible for any amount of time, you’ll need to comply with public disclosure of the incident.  These requirements surfaced beginning in the early 2000s and have largely been dealt with within organizations: the applications that handle this type of data have controls in place that restrict access, encrypt the data, and report on who has accessed it at any given time.  Problems still occur, and fines, brand damage from public disclosure, and lower stock performance following disclosure to the financial markets are consequences businesses have to deal with. For the most part, though, businesses have adapted over the past decade by putting the appropriate controls in place and building strong audit and compliance programs to maintain both the security of this data and the required regulatory transparency.

The type of data that needs the most attention is sensitive data.  This is the data that should be considered secret, classified as such, and accessible only by individuals within the organization who have a need to know.  It involves the organization’s competitive advantage: financial planning documents, strategic resource plans, expensive research documentation and design specifications, special ‘recipes’ for how something is constructed, and so on. This type of data has largely been entrusted to people to manage, and most people in the organization assume it is safe because, “it must be safe!” At least, that is what people with need-to-know access tend to believe when interviewed about the strength of the application controls that restrict access to the systems manipulating the secret data.  The truth is that the classification of secret data is subjective (unless you’re Coca-Cola, with one “Secret Sauce” recipe locked behind fifteen doors under a mountain that nobody knows the location of); most people creating and using secret data don’t understand how to classify and control it, or don’t realize the ramifications of mishandling it.  Traditionally, the organizations that needed to deal with secret and top-secret information were the military and some government agencies.  They have the advantage of strong Role-Based Access Control (RBAC), and all of their processes take into account the concepts of need-to-know and least privilege. Most organizations today don’t have the resources to enforce strict RBAC, the data is online and digital rather than in a vault, and the people entering the workplace expect to share information in order to do their work. Fortunately, this type of data ends up being less than roughly 5% of the average organization’s data.
If we can identify which applications process this type of data, we can make better decisions on how to secure and monitor the data and the people who access it.
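
The need-to-know and least-privilege ideas above can be sketched as a minimal role-based access check. The role names and data classifications below are hypothetical examples, not a prescribed scheme:

```python
# Minimal RBAC sketch: map roles to the data classifications they may read.
# Role and classification names are hypothetical illustrations.
ROLE_CLEARANCE = {
    "staff":     {"unclassified"},
    "finance":   {"unclassified", "regulatory"},
    "executive": {"unclassified", "regulatory", "sensitive"},
}

def can_read(role: str, classification: str) -> bool:
    """Least privilege: deny unless the role is explicitly granted access."""
    return classification in ROLE_CLEARANCE.get(role, set())

print(can_read("staff", "sensitive"))      # False: no need-to-know
print(can_read("executive", "sensitive"))  # True: explicitly granted
```

The design choice worth noting is the default-deny lookup: an unknown role or classification maps to an empty grant set, so nothing is ever accessible by accident.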

The state of information security today should be one of major concern for all businesses.  In the past we’ve seen viruses, spam, website defacement, and theft of credit card information.  All of these still occur today, though the IT industry has gotten better at dealing with them through anti-virus and anti-malware software, email spam filtering, and intrusion detection appliances (IDS/IPS).  What has changed is that it is now profitable for a ‘bad actor’ (a hacker or group of hackers working together) to steal your information and sell it for money: there is a market for credit card information, personal information, social security numbers, and more.  The range of bad actors threatening your organization begins with traditional hackers who cause mayhem; individual ‘black hat’ hackers who work for hire or alone to penetrate your network and steal information, or who sell the vulnerability or door into your organization (a ‘0-day’, an exploit that has never been seen in the wild before); networks of ‘hacktivist’ hackers who target organizations on a whim and can collaborate to cause greater damage (e.g. the Anonymous and LulzSec groups); organized cyber-crime; nation-state espionage and theft of intellectual property; and finally cyber terrorism.  These individuals and groups use a combination of methods to attack your company’s network and applications. Taking an aggressive security posture against the biggest bad actors, such as nation-states, is the wrong approach, as these entities have far more money and resources than most organizations are willing to spend; individual governments are realizing this and building their own organizations to deal with that level of threat.  It is now common to hear the term ‘cyber-warfare’ in the media, and even some of our IT giants are starting to deal with it as well.
For example, Google has issued a press release describing that it will start informing Gmail users when a particular message may originate from a state-sponsored campaign or attack.7 Microsoft has likewise been in the news after the newly discovered state-sponsored malware known as ‘Flame’ abused a weak Microsoft certificate to pass itself off as software trusted by Microsoft’s own Windows updating mechanism.8

In order to protect data, businesses today need to identify where they are exposed to risk in their IT systems and infrastructure and decide how to leverage new architectures and security controls to reduce the likelihood of a successful attack. (In other words, don’t throw security devices and controls everywhere; think about relocating and grouping the applications that handle sensitive data together, and concentrate the security investment there.) Businesses also need to invest in strong Security Information and Event Management (SIEM) processes and tools.  These tools, and people with the skills to use them, are newly emerging and in high demand, and in many ways we are heading into uncharted waters with respect to how much data we have to keep track of.  Leveraging outsourced security may help improve your security program, build new operational processes, and train your existing security professionals, but that is only part of the answer.  Knowing that we can never achieve 100% security or complete avoidance of risk, you must teach your security professionals to think in terms of risk management every day.  They should be working with the business to develop and document plans for what to do when an application or system, and its data, is compromised.
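
At its core, SIEM correlation is about turning a flood of events into a handful of actionable alerts. A toy illustration of the idea, with a hypothetical log format and threshold (real SIEM platforms do this at vastly larger scale, across many event sources):

```python
# Toy SIEM-style correlation: flag any source with repeated failed logins.
# The event tuples and the threshold of 3 are hypothetical illustrations.
from collections import defaultdict

events = [
    ("10:00:01", "10.0.0.5", "login_failed"),
    ("10:00:03", "10.0.0.5", "login_failed"),
    ("10:00:04", "10.0.0.5", "login_failed"),
    ("10:00:09", "10.0.0.7", "login_ok"),
]

THRESHOLD = 3  # alert once a source accumulates this many failures

failures = defaultdict(int)
for _, src, outcome in events:
    if outcome == "login_failed":
        failures[src] += 1

alerts = [src for src, count in failures.items() if count >= THRESHOLD]
print(alerts)  # ['10.0.0.5']
```

The point of the sketch is the shape of the problem: individual events are benign on their own, and only aggregation across time and sources reveals the pattern worth escalating.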


[1] The CIA Triad:

[2] United States Census Bureau – World Population Estimate:

[3] World Bank, World Development Indicators

[5] AP / U.N. Telecommunications Agency:

[6] Intel & Micron Joint Venture, 2011:

[7] Google starts warning users of state-sponsored computer attacks:

[8] Microsoft certificate used to sign Flame malware, issues warning:



