Virtualization Concerns and the Cloud

2 04 2013


With the sonic boom of Cloud, today's CIOs are tripping over themselves to push as many of their technology initiatives as possible into the Cloud.  Cloud computing has become the modern-day penicillin for technology challenges, allowing the Enterprise CIO to reduce CAPEX, leverage performance-driven offerings and transfer the operation, management and maintenance of infrastructure to a Cloud services provider.  While on the surface this sounds like motherhood and apple pie, it is imperative that IT leaders peel back the layers, ensure they know exactly what they are, and aren't, buying, and understand the underlying platforms supporting their IT systems and how they are delivered.  Virtualization, especially on shared platforms, inherently carries a breadth of security concerns that require ongoing management, regardless of whether systems are insourced or outsourced.  Most importantly, as security professionals, it is our job to ensure our companies aren't sacrificing security principles while migrating to this modern consumption model.

Exploring Some Basic Virtualization Concerns

The vast majority of Cloud products are built on a layer of clustered physical hardware running a hypervisor that enables virtualization, or support for multiple virtual machines (VMs), on a single physical chassis.  The hypervisor abstracts the hardware and software layers, allowing the underlying server infrastructure to provide the processing power and memory while, in many cases, multiple VMs cohabitate on the same physical host (Vasudevan, McCune and Qu).  Because of the way the VMs interact with the hypervisor, and the hypervisor with the common hardware elements of the server's computing architecture, there is constant concern about the potential vulnerabilities that this common binding can create.  Some of the key areas of concern rooted in the virtualization hierarchy include: 1) Trojaned virtual machines, 2) improperly configured security, 3) improperly configured hypervisors and 4) data leakage through offline images (Cox).  A brief outline of each of these potential vulnerabilities follows so that the risk can be better understood.

A key concern for any virtualized Cloud consumer should be Trojaned machines.  A Trojaned machine can take one of many forms, but for the purposes of this blog it includes an infected VM image file, a VM image file that contains malicious code, or a VM sharing a common physical platform for the purpose of reconnaissance against, or attacks on, another VM.  A recent Trojan, Crisis, discovered by Kaspersky Lab in July 2012, is able to replicate itself into a VM instance (Rashid).  While this was the first known Trojan that specifically attacks virtual machines, it is a reasonable assumption that virtual machines will be a lucrative target moving forward (Rashid).  Additionally, compromises aren't always malicious or intentional.  It was reported that Amazon Cloud customers determined that one of the Amazon Machine Images (AMIs) in the community image library, used as base images for their Cloud service, was compromised (Cox).  In this case, the image was not intentionally being distributed, yet customers were running a compromised machine from their initial installation.  Lastly, it is conceivable that a virtual machine running on a common hypervisor and physical platform could allow a malicious operator to gain knowledge about a target VM, or even gain access to its contents.  A postdoctoral researcher in MIT's Computer Science and Artificial Intelligence Lab, and three of his colleagues, claimed that a snooper could land on the same VM host on Amazon's cloud service by launching their VM at the same time as the target (Babcock).  In order to ensure proper VM behavior and optimize security, it is critical that VMs are built from known good source images, monitored for proper behavior, and that shared-platform exposures are understood.
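
One practical way to act on the "known good source images" point is to verify an image's digest against a trusted baseline before it is ever deployed.  The sketch below is a minimal illustration of that idea; the file name and the baseline digest are placeholders, not values from any real image library.

```python
import hashlib
from pathlib import Path

# Hypothetical baseline of trusted image digests, published by the image
# maintainer over a separate, authenticated channel.  The digest shown is a
# placeholder, not a real image hash.
TRUSTED_IMAGES = {
    "base-webserver.qcow2": "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large VM images do not need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_image(path: Path) -> bool:
    """Return True only if the image matches its known-good digest."""
    expected = TRUSTED_IMAGES.get(
    if expected is None:
        return False
    return sha256_of(path) == expected

print(verify_image(Path("/var/lib/images/base-webserver.qcow2")))
```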

While VMs create a separation between the hardware and software layers of the server infrastructure, they have also blurred the long-standing boundary between the Server Administrator and the Network Engineer.  Traditionally, the Network Engineer has focused on the proper movement of data across the corporate network and, in many cases, many of the associated security functions; the System Administrator, on the other hand, has cared for the operation and maintenance of the server hardware.  Due to the full-stack integration brought about by virtual machines and hypervisors, ranging from the server down through the network, many functions that have traditionally fallen to the Network Engineer, such as VLAN tagging, QoS, routing and access lists, now fall to the administrators of the virtual environment – who may or may not possess the qualifications to properly secure it (Cox).  The result is a gap that requires either a "super IT administrator" with the breadth and depth of skills to deliver virtual services using best practices, or a hypervisor that allows multiple Administrators and Engineers to collaborate in building and delivering a virtual machine.  Without ensuring the right skills are performing the right tasks, the same integration of functions that makes VMs and Cloud an ideal environment can also convolute the management and security of the infrastructure.

While the hypervisor performs the "magic" that allows our VMs to share a common set of physical platforms, it is also a core and obvious area of exposure if not properly configured and secured.  Because the hypervisor management platform authorizes who can create, delete and change virtual machines, as well as the accessibility between various VMs and common resources, it is critical that the hypervisor and its management platform are properly secured within a Cloud environment (Scarfone, Souppaya and Hoffman).  The first level of securing the hypervisor should address user management and access control; authentication should provide a front line of defense and enable scalable, tiered levels of access for Cloud administrators.  Further, authorization should restrict administrators' access to only the resources they directly administer, in order to reduce insider threat, mitigate the risk of misconfiguration and prevent unnecessary access.  Secondly, while some communication functions are required for the basic operation of a VM, it is important that policies exist to define the rules for intra-customer VM-to-VM access, as well as inter-customer VM interactions.  Finally, it is critical that these policies are established, followed, and then reviewed and maintained regularly.
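
To make the least-privilege point a bit more concrete, here is a toy sketch of a deny-by-default authorization check that maps each Cloud administrator to only the resources they directly administer.  The administrator names and resource identifiers are made up; a real hypervisor management platform would enforce this through its own role model.

```python
# Hypothetical grants: each administrator may act only on resources
# explicitly assigned to them.
GRANTS = {
    "alice": {"tenant-a/vm-01", "tenant-a/vm-02"},
    "bob":   {"tenant-b/vm-07"},
}

def authorize(admin: str, action: str, resource: str) -> bool:
    """Deny by default; allow only actions on directly administered resources."""
    permitted = resource in GRANTS.get(admin, set())
    print(f"{admin:6s} {action:9s} {resource}: {'ALLOW' if permitted else 'DENY'}")
    return permitted

authorize("alice", "snapshot", "tenant-a/vm-01")   # ALLOW
authorize("alice", "delete",   "tenant-b/vm-07")   # DENY: not her resource
authorize("carol", "create",   "tenant-c/vm-99")   # DENY: unknown administrator
```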

The last concern to review is the snapshot and VM image management exposure posed by VMs and the hypervisor.  Since VMs are hardware independent, Cloud administrators have the ability to pause, stop and even take snapshots, a real-time image of the entire VM – including the contents of memory – stored in a file (Cox).  Since these files can be stored, moved, duplicated and so on, a VM or memory image containing sensitive or confidential data can create significant risk for an organization.  If a Cloud provider is taking regular snapshots for a customer, the handling and storage of this information could pose a greater threat than the active VM itself.  It is therefore critical that VMs are properly monitored and/or decommissioned after use, and that VM image files are diligently secured commensurate with the sensitivity of the data they contain (Bateman).  Ironically, the greatest exposure created by the VM may not be the active VM itself, but the offline data, which also requires full lifecycle security considerations.

So Should I Cloud?

Now that some of the primary security concerns for virtualized Cloud computing have been established and discussed, the key question is: now what?  Can Cloud computing even be trusted?  The answer is that whether a Cloud provider is used or an Enterprise builds and hosts its own virtualized infrastructure, similar risks will exist and need to be managed.  The primary difference lies in the level of control that you, as a customer of the Cloud provider, possess to ensure that your IT resources are being managed, maintained and secured using best practices and within your policy and compliance requirements.  No different than the enterprise, Cloud providers need to define policies, procedures and controls to manage and mitigate the risks that come with delivering a virtualized solution.  Further, these initiatives should be clearly documented and systematically communicated to their customers.

IT leadership considering Cloud outsourcing solutions needs to understand the resources being considered for a shift to the Cloud.  This includes an awareness of all aspects of the systems, including the type (and sensitivity) of the data, specific compliance requirements that might exist (e.g. PCI), recovery time and recovery point targets, the impact of downtime, the provider's security practices, and so on.  From there, a thorough review of the available Cloud solutions should be conducted, covering and aligning with the critical elements identified when reviewing the systems to be outsourced.  Additionally, an understanding of how the provider sets policies, what its standard policies are and whether any customer-specific flexibility exists should also be developed.  Finally, the criteria and requirements can be aligned with the potential scope of Cloud providers.  It is only from this perspective that educated decisions can be made and needs can be aligned with offerings.

Cloud is not the enemy, but more an inevitable shift in how IT resources are consumed by the Enterprise.  Awareness of what the cloud is, and isn’t, is essential; Cloud shouldn’t be a black box where you put all of your systems and data and then believe that all of your risk and exposure is gone.  Security concerns don’t dissolve into the ether by moving your systems to the Cloud.  Arguably, they are greater as the Enterprise’s focus should shift to validation and verification of the security posture of their Cloud environment.  After all, you never know who may be sharing the same virtual plane with you on the physical device hosting your VMs.


Babcock, Charles. Information Week. 27 April 2012.

Bateman, Kayleigh. Don't Let Dormant Virtual Machines Threaten Data Centre Security. n.d.

Cox, Philip. Top Security Virtualization Risks and How to Prevent Them. Islandia, 2011.

Rashid, Fahmida Y. Security Watch. 21 August 2012.

Scarfone, Karen, Murugiah Souppaya and Paul Hoffman. Guide to Security for Full Virtualization Technologies. Gaithersburg: NIST, 2011.

Vasudevan, Amit, et al. Requirements for an Integrity-Protected Hypervisor on the X86 Hardware Virtualized Architecture. n.d.

Wi-Fi’s WPA Hacked… Again

31 03 2013


Since its implementation, Wi-Fi has had a troubled time establishing a reliable encryption standard despite its exponential growth in popularity among businesses and casual users alike. After the epic failure of the Wired Equivalent Privacy (WEP) algorithm in 2001 due to weak and predictable encryption methods, a new encryption standard was needed to pick up where WEP had failed (Borisov, Goldberg and Wagner). The Wi-Fi Alliance's Wi-Fi Protected Access (WPA) and the Institute of Electrical and Electronics Engineers' (IEEE) WPA2 standard, which provided stronger encryption and mutual authentication, were supposed to be the answer to all of our Wi-Fi woes (Wi-Fi Alliance 2). They have done a decent job; at least until the Wi-Fi Protected Setup (WPS) feature was introduced. This is a great example of how tipping the scale in favor of convenience rather than security didn't work out so well.

A Brief Background on WPA/2

For the scope of this discussion, I will only address the personal pre-shared key (PSK) flavor of WPA. While WPA and WPA2 are indeed much more robust security mechanisms than their predecessor, WEP, they have problems of their own. Both implementations of WPA use a 4-way handshake for key exchange and authentication. WPA utilizes a constantly changing temporary session key, known as a Pairwise Transient Key (PTK), derived from the original passphrase in order to deter cryptanalysis and replay attacks. During this process the user-selected PSK is input into a formula along with the Service Set Identifier (SSID) of the given network and the SSID length, and then hashed 4096 times to derive a 256-bit Pairwise Master Key (PMK). Another function is performed on the PMK using two nonce values and the two Media Access Control (MAC) addresses of the access point and the client, which in turn generates a PTK on both devices (Moskowitz). These PTKs are then used to generate encryption keys for further communications (Wi-Fi Alliance). The problem is that this 4-way handshake can unfortunately be observed by a third party. If an outside device captures the handshake, the two MAC addresses, the nonce values and the cipher suite in use can be obtained, and the PTK derivation can then be reproduced by the outsider (Moskowitz). A dictionary or brute-force attack can then be run to find the original PSK the PTK was derived from. Therefore, choosing a weak password significantly reduces the effectiveness of WPA and greatly increases the chances that your PSK will be discovered.
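
To illustrate why a weak passphrase is the Achilles' heel here, the sketch below derives a PMK the way WPA/WPA2-Personal does (PBKDF2-HMAC-SHA1 over the passphrase, salted with the SSID, 4096 iterations, 256-bit output) using nothing but Python's standard library. The SSID and candidate passphrases are made up, and the PTK/MIC comparison an attacker would perform against a captured handshake is only noted in comments.

```python
import hashlib

def derive_pmk(passphrase: str, ssid: str) -> bytes:
    """WPA/WPA2-Personal PMK: PBKDF2-HMAC-SHA1(passphrase, ssid, 4096 rounds, 32 bytes)."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

# The 4-way handshake mixes the PMK with both MAC addresses and both nonces to
# produce the PTK.  An attacker who captured the handshake repeats this loop for
# every candidate passphrase and compares the resulting MIC values (omitted here).
for candidate in ("password", "letmein", "correct horse battery staple"):
    pmk = derive_pmk(candidate, "HomeNetwork")
    print(f"{candidate!r:35s} -> PMK {pmk.hex()[:16]}...")
```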

Then Came WPS

In 2007 the Wi-Fi Alliance decided to make connecting to WPA-enabled networks easier for home users and developed the WPS specification. Their goal was to promote best practices for security while providing ease of use for home users (Wi-Fi Alliance 1). Essentially, they accomplished this by creating a backdoor into your WPA-enabled network.

WPS comes in two modes of operation, a push-button-connect mode and a personal identification number (PIN) mode. Furthermore, the PIN mode is split into two subcategories, an internal registrar mode and an external registrar mode (Viehböck 3-4). While the push-button mode has security implications of its own, we are going to focus on the external registrar PIN mode of operation.

This is Where Things Get Interesting

The external registrar PIN mode of operation only requires that a foreign wireless device send an 8-digit PIN that matches the 8-digit PIN set on the WPS-enabled access point or on the external registrar used to authenticate WPS clients. If the PIN that was sent matches, the access point or registrar responds with the PSK needed to authenticate to the network. Thus, the security of a WPA2-enabled network, even one with a strong 60-character passphrase, could potentially be compromised by exploiting an 8-digit PIN. To add insult to injury, the 8-digit PIN is actually 7 digits, with the eighth digit being a checksum of the previous 7. The 8 digits are then split in half during transmission, with digits 1-4 being the first half of the PIN and 5-8 being the second half. During PIN authentication each half of the PIN is sent and authenticated separately. Based on the response given by the access point or registrar for a submitted PIN, an attacker can determine whether the first and second halves were correct or incorrect independently of each other. At this point, to gain unauthorized access to the network, you essentially just need to brute force two 4-digit PINs, or 10^4 + 10^4. That's only 20,000 possible combinations. Additionally, since the eighth digit of the PIN is a checksum, you really only have a maximum of 10^4 + 10^3, or 11,000 possible values to brute force (Viehböck 4-6). Keep in mind that this has nothing to do with the strength of your actual WPA passphrase. The most disturbing implication is that an otherwise well-secured WPA-PSK network with a practically uncrackable passphrase could still be easily compromised by guessing 1 of 11,000 possible values.
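
The arithmetic behind those numbers is easy to check. The sketch below counts the search space and also shows the checksum calculation as described in Viehböck's paper; treat the checksum routine as an illustrative reimplementation rather than reference code.

```python
def wps_pin_checksum(first_seven: int) -> int:
    """Checksum digit appended as the 8th WPS PIN digit (illustrative
    reimplementation of the algorithm described by Viehböck)."""
    accum, pin = 0, first_seven
    while pin:
        accum += 3 * (pin % 10)
        pin //= 10
        accum += pin % 10
        pin //= 10
    return (10 - accum % 10) % 10

print(wps_pin_checksum(1234567))        # the full 8-digit PIN appends this digit

full_space   = 10 ** 7                  # 7 free digits plus a checksum digit
split_halves = 10 ** 4 + 10 ** 4        # halves verified independently: 20,000
with_cksum   = 10 ** 4 + 10 ** 3        # checksum fixes the last digit: 11,000
print(full_space, split_halves, with_cksum)
```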

What Devices are Affected by This?

This attack was published in late 2011 and, unfortunately, the vast majority of small office/home office (SOHO) wireless routers in use remain vulnerable. Additionally, most of the wireless routers and access points on the market have the WPS feature enabled by default, and with certain vendors the user isn't even given the option to disable it! Wireless router vendors have been notified of this vulnerability, and some have already released firmware updates disabling the WPS PIN feature by default or, in some cases, giving the user the option to disable it (Viehböck 9). The problem is that the average home user will probably not routinely update their router firmware and may remain vulnerable indefinitely. A recent scan using Wash, a tool used to identify WPA networks that are vulnerable to this attack, revealed 14 vulnerable SSIDs within close proximity to my home. There is also a spreadsheet of known vulnerable devices hosted on Google Docs (WPS Flaw Vulnerable Devices).

How to Protect Yourself

Update your router or access point to the latest firmware available and completely disable the WPS feature. If your device will not let you disable WPS, contact your vendor or consider purchasing a device that will let you. Also, it couldn’t hurt to run the Wash tool and see if your network is listed as being vulnerable. If you want to take it one step further, the Reaver tool will enable you to run the WPS PIN attack against your own network to determine if you are indeed susceptible to this vulnerability.


Borisov, Nikita, Ian Goldberg and David Wagner. "Security of the WEP Algorithm." n.d. (In)Security of the WEP algorithm. 16 February 2013.

Moskowitz, Robert. Weakness in Passphrase Choice in WPA Interface. 4 November 2003. 17 February 2013.

Viehböck, Stefan. Brute Forcing Wi-Fi Protected Setup. 26 December 2011. Document.

Wi-Fi Alliance. "State of Wi-Fi Security." January 2012. Wi-Fi Alliance. Document. 16 February 2013.

—. Wi-Fi Certified Wi-Fi Protected Setup. December 2010. Document.

"WPS Flaw Vulnerable Devices." n.d. Document. 17 February 2013.

Cloud Storage and Privacy: How much are you willing to pay to protect your data?

28 03 2013


We have all been warned that our Internet purchasing habits and how much we share about our day-to-day lives could place us at risk of being victimized.  However, a recent study published by the European Network and Information Security Agency (ENISA) shows that users, even those who have elevated concerns over privacy, do not heed some of these warnings.  In this study, a majority of consumers were willing to submit personal contact information for a mere 67¢ discount on a $10.05 online purchase (XRates) (N., Preibusch and Harasser).  So the question is no longer whether a user is willing to share information in exchange for discounts; it is how much information they are likely to share in exchange for discounted services.  This blog explores this question as applied to the adoption of cloud-based storage services.

How much privacy are we willing to sacrifice?

As evidenced by the ENISA study, it turns out that most people have a price, and if this study is any indication, the price is not very high.  In many cases, in the presence of "free services", many of us are willing to supply employment history, email addresses of our closest family and friends, phone numbers, birthdays, and political views – to name a few.  So, it turns out that nothing really is free.  The price paid for free services is personal information that can be used to support targeted advertising revenue – based on your observed behaviors, spending patterns, your political, social, and financial associations, and, more importantly, who you know.  By allowing service providers to observe you, they are able to develop a personal profile that can be sold to their 'affiliates' (Google).  To gain insight into what is being sold to affiliates, this author conducted a simple experiment using the third-party plug-in PrivacyFix, a tool that estimates the advertising value of Google account profiles.  For the experiment, a Google account was configured with a public profile (Google+) that included links to employment history, more than 100 friends, colleagues, and family, and an association with Carnegie Mellon University.  Even with blocks placed on most tracking mechanisms, this Google Plus account allows Google to track 55% of pages visited, and is valued at $25.30 per year in advertising (Anonymous).  This $25.30 subsidizes the free services Google provides, effectively offsetting the pricing of paid services.

Cloud-based storage services are built on this same model – Google and DropBox (to name a few) offer free cloud-based storage services with options to increase your capacity.  Basic service starts at 5 GBytes, with increasing levels of storage capacity awarded through new customer referrals (e.g. family, friends, and colleagues) (DropBox).  For capacity needs beyond the default 5 GByte level, subscription prices start at as little as $.17 per GByte per year (DollyDrive), and include "free" add-on services to support backup and recovery and revision control, but most importantly: data sharing and collaboration.  Data sharing and collaboration promote expansion of the customer base, but also promote vendor lock-in by virtue of a shared infrastructure.

Strengths & Weaknesses of Commercial Cloud Storage Options

In spite of these somewhat troubling privacy concerns, new Cloud Storage service providers seem to be popping up each year, and while paid services are still offered at a higher price point than local storage, there are some compelling reasons for migrating to the cloud in some cases.

Table 1 identifies some of the key strengths and weaknesses of today's cloud storage solutions as compared to local storage alone.  For most consumers, the key strengths that differentiate cloud storage from local storage (without software and hardware capital investment) are the infrastructure that supports collaboration and the ability to back up and restore data to an offsite location.

| Strengths | Weaknesses |
| --- | --- |
| Increased productivity.  Data can be seamlessly accessed across devices and operating systems (DropBox). | Data transfer latency.  As compared to local data transfers, digital transfer technology can be 6800 times slower.[1] |
| Ease of setup and use.  Many cloud storage service providers include operating system plug-ins that present cloud storage as a locally mapped storage device. | Confidential information.  Your name, likeness, age, email addresses and names of colleagues and friends, and unencrypted data may be shared with unknown third parties (Google). |
| Flexible pricing.  Services range from free, to referral based, to pay as you go, to subscription based (DropBox). | Limited liability policies.  Many service providers require that the customer indemnify the service provider against claims for damage (Google). |
| Data revision recovery.  Many services provide the ability to track changes and recover previously saved versions of files (Dolly Drive). | Dependency on an external provider.  The service provider may reserve the right to change the terms of agreements at any time (including the right to suspend or discontinue services) (Google). |
| Data sharing and collaboration.  Shared data can be configured to automatically replicate across subscribed devices and users, facilitating improved productivity for shared data (DropBox). | Variable security.  While security and redundancy can be built into any given platform, each provider balances differing sets of quality attributes, which may expose users to unintended vulnerabilities (Borgmann, T. and Herfert). |
| Elasticity.  Cloud storage capacity is resizable without the need for capital investment. | Service-switching interoperability.  Switching service providers is possible; however, some providers deliver unique services which are not easily transportable to a new service provider (e.g. Dolly Drive Backup versus Microsoft Azure). |
| Off-site storage.  In the event of catastrophic loss of local storage and processing hardware, cloud-based storage provides a low-hurdle alternative for backup and safe storage. | Pricing for paid services.  In 2013, local hard-disk storage costs less than cloud-based storage.[2] |

Table 1. Cloud Storage Strengths and Weaknesses

The big weaknesses are the limited liability and the potential exposure and spillage of confidential information.  Data transfer latency, while not a show-stopper, is a significant hurdle to more widespread adoption, especially in light of the fact that average data transfer rates in the United States are nearly 6800 times slower than local disk access (Streams) (Seagate).  Some mitigation strategies exist, such as pre-seeding data stores to reduce latency; however, this remains a significant hurdle for some users.  If we assume that the ENISA study represents a predictive model for cloud storage adoption, then liability and confidentiality are not viewed as weaknesses, so the only weakness that really stands in the path of widespread adoption is price.  Today, pricing of cloud-based storage for consumer-level plans is about four times that of local storage (assuming that the average user capitalizes the cost of hard disk space every two years), generally starting at $.17 per GByte per year[1].

Moore’s Law and Storage

Now, if we take into account the pricing history of hard drives and capacity over the last thirty years (Figure 1 and Figure 2), we note that there is a close correlation to Moore's Law.[2]  Between 1992 and 2012, the cost per Megabyte dropped by roughly half every two years.  While it is too early to definitively predict, early evidence does suggest that Moore's Law may prove to predict the future of pricing for cloud-based storage.  Just since 2011, the starting capacity for free services has doubled, and the pricing on paid services has dropped by half[3].
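
As a back-of-the-envelope illustration of that halving argument, the sketch below extrapolates a per-GByte price under the assumption that it halves every two years. The starting figures ($.17/GByte/yr for cloud storage, $.07/GByte for a local SATA drive, both from the footnotes below) are used purely for illustration.

```python
def projected_price_per_gb(base_price: float, base_year: int, year: int,
                           halving_period: float = 2.0) -> float:
    """Extrapolate a per-GByte price assuming it halves every `halving_period` years."""
    return base_price * 0.5 ** ((year - base_year) / halving_period)

# Illustration only: if cloud storage costs $0.17/GB/yr in 2013 and follows the
# same halving curve, when does it fall below a $0.07/GB local disk capitalized
# over two years (roughly $0.035/GB/yr)?
for year in range(2013, 2022):
    price = projected_price_per_gb(0.17, 2013, year)
    print(year, round(price, 3))
```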


Deciding How Much to Adopt

While most users are likely to continue using only the "free services" until the price point for paid services drops below the cost of purchasing new hardware, the other strengths referenced in Table 1 may drive early adopters to migrate toward cloud-based storage solutions sooner.  For these early adopters, a cost-decision model may help to identify and quantify the relevant economic facets.  Such a decision model would quantify up-front costs, annual investment costs, and operational costs to arrive at a total cost of ownership (Bibi, Katsaros and Bozanis):

TCO/yr = C_u + C_ad + C_o

where C_u is the total up-front cost (enrollment fees and setup, acquisition of hardware and software), C_ad is the annual investment cost (annual subscription and maintenance fees), and C_o represents operational costs, such as annual Internet connection costs, utilities, and in some cases the cost of off-site storage and travel.
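
A minimal sketch of that model, with purely illustrative numbers rather than figures from the cited study, might look like this:

```python
def annual_tco(upfront: float, annual_investment: float, operational: float,
               amortization_years: int = 1) -> float:
    """TCO/yr = C_u + C_ad + C_o, optionally spreading the up-front cost
    over several years."""
    return upfront / amortization_years + annual_investment + operational

# Illustrative numbers only: a 100 GByte subscription at $0.17/GByte/yr, no
# up-front hardware, and a $60/yr share of the Internet connection.
print(annual_tco(upfront=0.0, annual_investment=0.17 * 100, operational=60.0))
```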


Anonymous. PrivacyFix Plug-in Results on Google Plus Author. February 2013.

Bibi, S., D. Katsaros and P. Bozanis. "Business Application Acquisition." IEEE Software (2012): 86-93.

Borgmann, M., et al. "The Security of Cloud Storage Services." Technical Report. Fraunhofer Institute for Secure Information Technology, 2012.

Dolly Drive. "Cloud Backup for Mac." Dolly Drive. February 2013.

DollyDrive. Pricing & Plans. February 2013. February 2013.

DropBox. "Dropbox – Tour." Dropbox. February 2013.

—. "Plans – Simplify Your Life." DropBox. February 2013.

Google. "Google Apps Terms of Service." Google Apps. Google. February 2013.

McCallum, J. Disk Drive Prices. February 2012. February 2013.

N., Jentzsch, S. Preibusch and A. Harasser. Study on Monetising Privacy: An Economic Model for Pricing Personal Information. Technical Report. European Network and Information Security Agency. Berlin: ENISA, 2012.

Seagate. "Hard Drive Data Sheet." December 2012. February 2013.

Streams, K. Global Internet Speeds Creep Back to 2012. August 2012. February 2013.

XRates. Historical Lookup Euro Rates Table. 27 February 2012. 18 February 2013.


[1] According to the Internet Archive, DropBox pricing 2011-2013.

[2] Intel co-founder Gordon Moore predicted that the number of transistors on a processor would double every two years; the prediction has proven profoundly accurate.

[3] According to the Internet Archive, DropBox pricing 2011-2013.

[1] Assuming a typical uplink data transfer rate of 7 Mb/s (Streams) as compared to SATA hard disk transfer rates in excess of 6 GB/s (Seagate).

[2] Based on 2012 prices of a SATA II hard disk at $.07/GB as compared to a cloud-based storage solution priced at $.17/GB/yr.


Introducing Ransomware

27 03 2013

I am guilty of not regularly following malware scam security threats; most seem easily prevented and are typically triggered by user actions.  However, a new variant has recently surfaced that is interesting because it leverages both technical and emotional measures to extort money.  That variant is ransomware, which displays a message that a user's PC is locked due to a crime they committed and that payment must be issued before the user can resume use of the PC.

Symantec has created a detailed white paper on this new threat which estimates yearly revenue from ransomware in excess of $5 million.  Even more surprising, approximately 3 out of every 100 users who receive the message pay the fine.  That begs the question: how are so many people fooled by a virus and willing to pay the malware creator?

To start with, let’s look at examples of messages shown to potential victims:




The success of the first message seems very unlikely based on the overall structure of the page, the wording and the payment instructions.  The second message, however, contains mock FBI branding, a web cam snapshot, and a fairly open description of the crime.  While some details of the message are overly specific and highly offensive to most of the population, a final comment adds that files may have been accessed with or without the user's knowledge due to a virus.  This is likely how the malware developers have been successful: creating a feeling of guilt in the user for downloading an illegal pirated song, or the possibility that a virus has downloaded something worse.

Beyond its use in the private sector, is it really impossible for a government to employ malware-based tools for a minor infraction such as a pirated song or downloading a movie still in theaters?  Looking into this further, Kaspersky has published a detailed report on governments creating malware for espionage against other governments and organizations.  In addition, there are confirmed reports of governments, such as Germany, using malware to spy on their own citizens.

It seems both ransomware and government adoption of malware are clear emerging patterns; the question becomes, will these two paths intersect?  Will ransomware be the next product governments look to adopt, creating the real possibility of automated enforcement of FBI cybercrimes?


Why IPv6 SEND will fail

26 03 2013

Before going into the security extension of the IPv6 Neighbor Discovery Protocol (RFC 4861) called SEcure Neighbor Discovery (RFC 3971), and why I think it will fail as a standard, I'd like to lay some groundwork by briefly touching on IPv6 adoption barriers, explaining the classic "Security, Ease-of-Use, Functionality Triangle" (Cyber Safety 1-5) and describing the default IPv6 behavior for neighbor discovery and address assignment.   We will then look at how SEND attempts to secure the discovery process.  I hope that by the end of this post the reader will see that while security is always an ideal, it isn't always practical in its implementation, and systems must have a degree of practicality on the Internet if they are to be adopted.

It is my observation, from over 15 years' experience in IT, a BSBA in MIS, an MS in MIS, a CISSP, and a CCNA, that IPv6 as a standard is very slow in its adoption due to a working infrastructure independent of IPv6, its highly disruptive nature, a lack of supporting equipment, and a general ignorance of the technology.  The only way for IPv6 to obtain global adoption is by the force of an empty IPv4 address pool.  Marketing tools such as "World IPv6 Day" are efforts to migrate networks before the IPv4 pool is depleted, after which services will be unavailable to those without an IPv4 address and vice versa.

The "Security, Ease-of-Use, Functionality Triangle" is one way of illustrating the fact that security, if not contradictory, works against an intuitive interface and the amount of a product's functionality.  This idea is supported in the book Geekonomics, in which the author states that security is often left aside because functionality sells products.  Increased functionality increases complexity, which drives up the cost of making those features secure (Rice).  Making a product more secure also makes it harder to use.  One must look no further than the standard computer login screen.  Many of us boot straight into the OS on our personal computers, while at work we must certainly log in first.  We boot straight into the OS because it is easier.  Or, so as not to look like complete slackers, perhaps we do so because we rely on a layered security approach: our doors and windows are locked and alarmed, so we feel we don't need login prompts.


We can see in the figure that the product is represented by the dot within the triangle.  As the dot moves toward Ease of Use, it moves away from Functionality and Security.  Think Linux versus Windows here.  Linux has functionality well beyond Windows but can be much harder to use, and OS manufacturers try to bridge the gap with GUI and command prompt interfaces.  So, to build a completely secure system, it must be limited to only those functions required, and those functions must be guarded so that they operate only as intended, when intended.


Keep this idea of the triangle in mind while we transition to IPv6 Neighbor Discovery and SEcure Neighbor Discovery, as we will circle back to it at the end.

One of the perks of IPv6 is a client's ability to obtain an address without the use of a DHCP server.  It isn't much of a perk since DHCP is already standard in most environments.  In fact, because DHCP is already standard, IPv6 Neighbor Discovery with Stateless Address Autoconfiguration (SLAAC) can become disruptive at implementation.

Here is how SLAAC works:

An IPv6-enabled router is configured by default to answer to the multicast address FF02::2.  Anytime an IPv6-enabled device needs to discover what network it is on, it sends a multicast Router Solicitation packet to FF02::2.   The local gateway responds with a multicast Router Advertisement packet, sent from the router's interface MAC address to the multicast address FF02::1, the standard all-IPv6-nodes multicast group.  The Router Advertisement packet includes the network prefix and default gateway.  From this the host can derive a link-local IP address, a globally unique IP address and a randomized globally unique IP address, all three based on its local network block.  The link-local and non-random globally unique IP addresses are created through the extended unique identifier-64 (EUI-64) process.  In short, EUI-64 has the host flip the 7th bit of the 48-bit host MAC address (if it's a one, it becomes a zero and vice versa) and insert the 16-bit hexadecimal value FFFE in the middle of the address to round out the 64-bit interface identifier of the 128-bit IPv6 address (Barker).
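
For readers who like to see the bit-twiddling, here is a small sketch of the EUI-64 step; the MAC address and the advertised prefix in the comments are made up.

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the 64-bit interface identifier from a 48-bit MAC address:
    flip the 7th bit (the universal/local bit) and insert FF:FE in the middle."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                                   # flip the U/L bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]
    return ":".join(f"{eui[i]:02x}{eui[i + 1]:02x}" for i in range(0, 8, 2))

# Made-up MAC; combined with an advertised prefix such as 2001:db8:1:1::/64
# this would yield the address 2001:db8:1:1:0211:22ff:fe33:4455.
print(eui64_interface_id("00:11:22:33:44:55"))          # -> 0211:22ff:fe33:4455
```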



Once this is completed, the host sends a Neighbor Solicitation packet to the derived IPv6 address.  If it receives no response, it knows the address is unique and assigns it to its interface.  This process is called duplicate address detection (DAD).  The host runs DAD for each of its IPv6 addresses (link-local, globally unique and random globally unique), and its completion also closes the SLAAC process (Barker).

The SLAAC behavior is supported by industry-standard manufacturers, making it easier to implement when transitioning to IPv6.  It provides full network functionality using an automated process.  On the triangle, one could reasonably argue that SLAAC sits at the bottom center, as there are few safeguards against rogue gateways providing false network or gateway addresses.

SEND attempts to remedy some of these insecurities through reliance on a public key infrastructure.  The chart below provides a good summary of known insecurities in the IPv6 Neighbor Discovery Protocol and SEND's remedies.

The chart makes it a bit plainer to see how SEND uses cryptographic signatures to guard against a number of attacks.  Again, SEND is an extension to NDP, not a replacement; the same router and neighbor solicitations and advertisements are used, they are just digitally signed.  The problems with this tack are:


  • It is difficult to implement, as it requires an established Certificate Authority (CA) on the network.
  • The CA's root certificate must first be trusted by hosts and routers before any Neighbor Solicitation or Advertisement packets are accepted.  This means either pre-staging equipment with the root certificate or undergoing an initial period of insecurity while accepting the initial untrusted advertisements.
  • The IP addresses move from EUI-64 or static assignment to a Cryptographically Generated Address (CGA).  This address is unrecognizable and more difficult to manage (a rough sketch of how a CGA is derived follows this list).
  • It opens routers to DoS attacks, because each signed message must be run through a cryptographic algorithm before acceptance or rejection.  This taxes the processor to the point where a flood of NDP messages could consume enough resources to impact router functionality.
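
To give a feel for why a CGA is "unrecognizable," here is a deliberately simplified, non-conformant sketch of the idea behind RFC 3972: the interface identifier is taken from a SHA-1 hash over the owner's public key, a random modifier, the subnet prefix and a collision count. The Sec/Hash2 hardening step is omitted and the key bytes are placeholders.

```python
import hashlib
import os

def simplified_cga_interface_id(subnet_prefix: bytes, public_key: bytes,
                                collision_count: int = 0, sec: int = 0) -> bytes:
    """Rough sketch of RFC 3972 'Hash1': SHA-1 over
    modifier || subnet prefix || collision count || public key, truncated to
    64 bits, with Sec packed into the three leftmost bits and the u/g bits
    cleared.  The Hash2 brute-force step used when Sec > 0 is omitted."""
    modifier = os.urandom(16)
    hash1 = hashlib.sha1(modifier + subnet_prefix +
                         bytes([collision_count]) + public_key).digest()
    iid = bytearray(hash1[:8])
    iid[0] = (iid[0] & 0x1C) | (sec << 5)   # Sec in bits 0-2, u/g bits cleared
    return bytes(iid)

# Placeholder inputs: the first 8 bytes of a /64 prefix and a fake public key.
prefix = bytes.fromhex("20010db800010001")
print(simplified_cga_interface_id(prefix, b"placeholder-public-key").hex())
```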

While SEND does provide additional security against spoofed router and host messages, it does not provide enough functionality and is difficult to implement and support.  SEND moves too far north on the triangle to be a practical solution.  It is for these reasons, I believe, that neither Microsoft nor Apple supports SEND (Perschke).  This will prove to be the final nail in the coffin: without major vendor support, there is nothing to implement in the first place.


Narten, T., IBM, E. Nordmark, Sun Microsystems, W. Simpson, Daydreamer, H. Soliman, and Elevate Technologies. "Neighbor Discovery for IP version 6 (IPv6)." RFC 4861. Internet Engineering Task Force (IETF), Sept. 2007. Web. 11 Feb. 2013.

Arkko, J., Ericsson, J. Kempf, DoCoMo Communication Labs USA, B. Zill, Microsoft, P. Nikander, and Ericsson. "SEcure Neighbor Discovery (SEND)." RFC 3971. Internet Engineering Task Force (IETF), Mar. 2005. Web. 11 Feb. 2013.

International Council of E-Commerce Consultants. Cyber Safety. Clifton Park, NY: EC-Council Press, 2010. Print.

Rice, David. "Six Billion Crash Test Dummies: Irrational Innovation and Perverse Incentives." Geekonomics: The Real Cost of Insecure Software. Upper Saddle River, NJ: Addison-Wesley, 2008. Print.

Barker, Keith. "IPv6-04 IPv6 Stateless Address Autoconfiguration (SLAAC)." YouTube. 25 Aug. 2011. Web. 10 Feb. 2013.

Lakshmi. "Secure Neighbor Discovery (SEND)." The Source for IPv6 Information, Training, Consulting & Hardware. N.p., 2008. Web. 11 Feb. 2013.

Perschke, Susan. "Hackers Target IPv6." Network World. N.p., 28 Nov. 2011. Web. 11 Feb. 2013.

Stretch, Jeremy. "IPv6 Neighbor Discovery." Packet Life. N.p., 28 Aug. 2008. Web. 11 Feb. 2013.

Information Security and the Sarbanes-Oxley Act

25 03 2013

After taking my time to search for a valuable topic to contribute to the blog, I remembered a financial accounting class that I had taken during the earlier part of my MSIT program. The course had a small component discussing the Sarbanes-Oxley Act (SOX) and the financial controls and risk for publicly traded companies. It also briefly touched upon the impact of SOX on IT and how companies needed to transform their systems controls and reporting capabilities to stay compliant. For the purposes of this blog I decided to examine the subject in a little more detail by looking at its requirements, related security frameworks and the results from a study of organizations that implemented SOX. I will attempt to do this by answering some key questions.

What is SOX and how did it come into being?

SOX is a legislative reform introduced in 2002 to improve the accuracy and integrity/reliability of the various financial statements of a publicly traded company. Its primary purpose is to ensure that the appropriate controls within an organization are implemented so that the creation and documentation of the information provided in the financial reports are governed according to a standard. This purpose serves various objectives, not the least of which are to build confidence among the company's investors, encourage independence between auditors and clients, and assign more accountability and ownership to the company's management (CFOs and CEOs) in relation to the disclosed financial information. The two sections most often quoted in IT security discussions related to SOX are Section 302 (Corporate Responsibility for Financial Reports) and Section 404 (Management Assessment of Internal Controls) (SOX 302; SOX 404).

SOX came into being because of a wave of accounting and audit malpractice and fraud by the executives of large corporations and their auditors, such as Enron, WorldCom and Arthur Andersen (Wikipedia), which resulted in losses of hundreds of millions of dollars in investments as the companies collapsed at the turn of the millennium. To prevent this from happening again, SOX was introduced, and it brought with it a set of requirements that forced an organization's information security landscape to change significantly if it was to stay in business.

Why did IT need to change?

The SOX act brought with it various by-products, such as regulatory authorities and governance frameworks. Among these was the Public Company Accounting Oversight Board (PCAOB). Its purpose was to provide guidance, through auditing standards, to the auditing firms assessing a company's compliance with SOX. Among these standards was a clause that discussed the management of internal controls. It stated:

“Determining which controls should be tested, including overall relevant assertions related to all significant accounts and disclosures in the financial statements. Generally, such controls include: Controls, including information technology general controls on which other controls are dependent”(Stults, 2004, p4)

How was information Security impacted?

COBIT (Control Objectives for Information and Related Technology) is a framework that was introduced to provide details on the creation and assessment of various IT controls, allowing information security teams to implement the various requirements of SOX inside an organization. COBIT has detailed guidance on various IT-related processes that are categorized into domains including Planning & Organization, Acquisition & Implementation, Delivery & Support and Monitoring (Stults, 2004, p6).

An organization created to help companies with their IT governance, known as ITGI (Information Technology Governance Institute), used COBIT and COSO (another SOX-related controls framework) to publish guidelines on various information security topics. Amongst the key areas, it provided details on Security Policy, Security Standards, Access and Authentication, Network Security, Monitoring, Segregation of Duties and Physical Security (Stults, 2004, p7).

A lot of what forms the basis of a company's security processes and infrastructure was being defined and enforced legally through the adoption of these frameworks. Organizations would have undergone information security transformation projects at all the above levels to achieve and retain compliant status.

A study by Dr. Janine L. Spears (DePaul University) analyzes the impact of SOX on information security. The study was carried out around 2009, seven years after the introduction of SOX. It is interesting to note the conclusions that were drawn from the study (Spears, 2009, p.1-4):

  1. Increased business collaboration and awareness of managing security risks within the organizations
  2. Greater maturity of security Risk management processes
  3. Increase in effectiveness of access control application
  4. Greater investments in information security to maintain compliance
  5. Building security programs around compliance requirements
  6. Improved overall information security in the organization (Spears, 2009, p.1-4)

Although all the above conclusions of the study indicate a positive sign and suggest an improved impact on information security, the 5th point above is a little discouraging. The discussion in the study points to the fact that organizations are limiting their security initiatives to implementing only the controls that are required by SOX. They are not evolving to improve beyond that level, which is a cause for concern, as these organizations feel that it is not necessary.

What to conclude?

I think one needs to appreciate the fact that the last 10 years have seen greater awareness of information security as a whole in large organizations. Having said that, a lot of this can be attributed to the SOX legislation and the indirect effect of its requirements for enforcement of controls and auditability of the information stored in a modern IT-enabled business.


Wikipedia – Accounting Scandals.

SOX 302.

SOX 404.

Stults, Greg. An Overview of Sarbanes-Oxley for the Information Security Professional. SANS Institute, May 9, 2004.

Spears, Dr. Janine L. "How Has Sarbanes-Oxley Compliance Affected Information Security?" ISACA Journal, Volume 6, 2009.

The Increasing Threat to Industrial Control Systems/Supervisory Control and Data Acquisition Systems

23 03 2013

This blog has previously discussed Industrial Control Systems (ICS) and Supervisory Control and Data Acquisition (SCADA) systems here and again here in November 2012.  Recently, ICS-CERT has released several bulletins that spell out trends and numbers showing an increase in the threats to ICS.

How much is the threat increasing?

ICS-CERT noted that in Fiscal Year (FY) 2012 (10/1/2011-9/30/2012) they "responded to 198 cyber incidents reported by asset owners and industry partners" and "tracked 171 unique vulnerabilities affecting ICS products" (ICS-CERT Operational).  This is an approximately five-fold increase over the number of incidents reported in FY2010 (41) (ICS-CERT Incident).

Why is the threat increasing?

While some of this sharp increase may be attributable to ICS-CERT beginning operations in FY2009 (ICS-CERT Incident) and an associated delay in the industry being made aware of this resource, it is likely that there have been an increasing number of ICS cyber incidents for the following reasons:

1)  “Many researchers” have “begun viewing the control systems arena as an untapped area of focus for vulnerabilities and exploits” and are using “their research to call attention to it.” (ICS-CERT 2010)

2)  Availability of search engines such as SHODAN that are tailored to assist operators, researchers (and attackers) in identifying internet-accessible control systems (ICS-ALERT-12-046-01A)

3)  Increased interest by hacktivists and hackers in ICS (ICS-ALERT-12-046-01A)

4)  Release of ICS exploits for toolkits such as Metasploit (ICS-ALERT-12-046-01A)

5)  An increased interest by attackers, possibly associated with foreign governments, in obtaining information regarding ICS and ICS software, for example stealing information related to SCADA software (Rashid) or, in the case of Stuxnet, attacking ICS to damage or shut down the controlled hardware (Iran).

Why are ICS networks still so insecure?

Some responsibility for the state of ICS security should be attributed to the primacy of Availability in the minds of ICS operators when evaluating the Confidentiality-Integrity-Availability triad.  This leads to long periods of time between declared outage windows in operations, and thus an extended period of time before new hardware or network security can be put in place.  However, it should be noted that ICS insecurity can itself lead to or extend outages, such as the recent failure to restart operations on time at a power generating facility due to an infection of the control environment by a virus on a thumb drive (Virus).  In this instance, availability of the plant was impacted by a security event that extended the planned outage by approximately three weeks (Virus).

How can ICS operators increase security?

With this in mind, it is imperative that ICS operators begin, or continue, to treat the security of ICS IT operations seriously, and factor increased security into their procurement and redesign plans.  Failure to do so can lead to increased outages or damage to operating equipment (see Stuxnet).  The good news is that there are security practices that can be put in place in the (hopefully) tightly controlled ICS environment that may not work in the comparatively more free-wheeling office network, including application white-listing (ICS-TIP-12-146-01B).  As many ICS vendors recommend against applying routine operating system patches, white-listing may assist in preventing the execution of malicious code introduced into the environment (ICS-TIP-12-146-01B).
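
As a minimal illustration of the white-listing idea, the sketch below checks executables against a hash-based allow list. Real products hook process creation in the operating system; this sketch, with its made-up path and placeholder digest, only shows the concept of flagging anything not on the list.

```python
import hashlib
from pathlib import Path

# Hypothetical allow list of approved executables, keyed by SHA-256 digest.
# The digest below is a placeholder, not the hash of any real binary.
APPROVED_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def is_approved(executable: Path) -> bool:
    """An executable is approved only if its digest appears on the allow list."""
    digest = hashlib.sha256(executable.read_bytes()).hexdigest()
    return digest in APPROVED_HASHES

# A scheduled sweep of an HMI workstation's application directory (path is
# made up) could flag anything not on the list for investigation.
for exe in Path("C:/HMI/apps").glob("*.exe"):
    if not is_approved(exe):
        print("NOT APPROVED:", exe)
```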

Other possible security controls that ICS operators should consider implementing include those suggested by ICS-CERT  (ICS-TIP-12-146-01B):

Network Segmentation – With the increasing frequency of taking formerly air-gapped control networks and connecting them to corporate networks and the internet, it is increasingly important that appropriate security measures be put in place to segment the control network as much as possible from more general-purpose networks (ICS-TIP-12-146-01B)

Role-Based Access Controls – Basing access on job function and managing it by role, rather than user by user, decreases the likelihood that an employee is given more access than needed (ICS-TIP-12-146-01B)

Increased Logging and Auditing – Incident response, remediation, and recovery (including root cause analysis) in the control network requires that detailed logs be kept and available (ICS-TIP-12-146-01B)

Credential Management (including strict permission management) – Where possible, centralized management of credentials should be implemented to ensure that password policy and resets can be performed more easily.  This centralized management will also ensure that superuser/administrator accounts are tracked and can be more easily disabled if needed (ICS-TIP-12-146-01B)

Develop an Ability to Preserve Forensic Data – Much like logging, the ability to preserve forensic data is important to allow for root cause analysis and, if the event is malicious in nature, identification and prosecution of the intruder/malicious actor.  This includes the ability to capture volatile data such as network connectivity or dynamic memory in addition to the more traditional forensics of hard drives. (ICS-TIP-12-146-01B)


"ICS-ALERT-12-046-01A - (UPDATE) Increasing Threat to Industrial Control Systems." The Industrial Control Systems Cyber Emergency Response Team. Industrial Control Systems Cyber Emergency Response Team, 25 October 2012.  Web.  28 January 2013.

"ICS-CERT - Operational Review Fiscal Year 2012." ICS-CERT Monitor.  Industrial Control Systems Cyber Emergency Response Team, n.d.  Web.  28 January 2013.

"ICS-CERT Incident Response Summary Report." The Industrial Control Systems Cyber Emergency Response Team. Industrial Control Systems Cyber Emergency Response Team, n.d.  Web.  28 January 2013.

"ICS-CERT - 2010 Year in Review."  The Industrial Control Systems Cyber Emergency Response Team. Industrial Control Systems Cyber Emergency Response Team, January 2011.  Web.  28 January 2013.

"ICS-TIP-12-146-01B - (UPDATE) Targeted Cyber Intrusion Detection and Mitigation Strategies." The Industrial Control Systems Cyber Emergency Response Team. Industrial Control Systems Cyber Emergency Response Team, 22 January 2013.  Web.  28 January 2013.

"Iran Confirms Stuxnet Worm Halted Centrifuges."  CBS News, 29 November 2010. Web. 2 February 2013.

"Virus Infection at an Electric Utility." ICS-CERT Monitor.  Industrial Control Systems Cyber Emergency Response Team, n.d.  Web.  28 January 2013.

Rashid, Fahmida Y.  "Telvent Hit by Sophisticated Cyber-Attack, SCADA Admin Tool Compromised." Security Week.  Wired Business Media, 26 September 2012. Web. 2 February 2013.