Wi-Fi’s WPA Hacked… Again

31 03 2013


Since its inception, Wi-Fi has had a troubled time establishing a reliable encryption standard despite its exponential growth in popularity among businesses and casual users alike. After the epic failure of the Wired Equivalent Privacy (WEP) algorithm in 2001 due to weak and predictable encryption methods, a new encryption standard was needed to pick up where WEP had failed (Borisov, Goldberg and Wagner). The Wi-Fi Alliance’s Wi-Fi Protected Access (WPA) and the Institute of Electrical and Electronics Engineers’ (IEEE) WPA2 standard, which provided stronger encryption and mutual authentication, were supposed to be the answer to all of our Wi-Fi woes (Wi-Fi Alliance 2). They have done a decent job; at least until the Wi-Fi Protected Setup (WPS) feature was introduced. This is a great example of how tipping the scale in favor of convenience rather than security didn’t work out so well.

A Brief Background on WPA/2

For the scope of this discussion, I will only be addressing the personal pre-shared key (PSK) flavor of WPA. While WPA and WPA2 are indeed much more robust security mechanisms than their predecessor, WEP, they have problems of their own. Both implementations of WPA use a 4-way handshake for key exchange and authentication. WPA utilizes a constantly changing temporary session key known as a Pairwise Transient Key (PTK), derived from the original passphrase, in order to deter cryptanalysis and replay attacks. During this process the user-selected PSK is fed, along with the Service Set Identifier (SSID) of the given network and the SSID length, into a key-derivation function that hashes it 4096 times to derive a 256-bit Pairwise Master Key (PMK). Another function is performed on the PMK using two nonce values and the two Media Access Control (MAC) addresses of the access point and the client, which in turn generates a PTK on both devices (Moskowitz). These PTKs are then used to generate encryption keys that encrypt further communications (Wi-Fi Alliance). The problem is that this 4-way handshake can unfortunately be observed by a third party. If an outside device captures the handshake, the two MAC addresses, the nonce values, and the cipher suite used can all be obtained. With those values in hand, an outsider can compute the PTK that any guessed passphrase would produce and check it against the captured handshake (Moskowitz). A dictionary or brute-force attack can therefore be run to recover the original PSK. Choosing a weak password thus significantly reduces the effectiveness of WPA and greatly increases the chances that your PSK will be discovered.
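The PMK derivation described above is the standard PBKDF2 function, so it is easy to reproduce with Python's standard library. A minimal sketch (the passphrase and SSID below are made-up examples, not values from the post):

```python
import hashlib

def derive_pmk(passphrase: str, ssid: str) -> bytes:
    """Derive the 256-bit WPA Pairwise Master Key.

    The passphrase is hashed 4096 times with the SSID as salt
    (PBKDF2-HMAC-SHA1), yielding a 32-byte (256-bit) PMK.
    """
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

# Hypothetical network for illustration:
pmk = derive_pmk("correct horse battery staple", "HomeNetwork")
print(pmk.hex())
```

Note that the 4096 iterations are a deliberate cost imposed on every guess in a dictionary attack, but because the PMK depends only on the passphrase and SSID, attackers can precompute tables of PMKs for common SSIDs.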

Then Came WPS

In 2007 the Wi-Fi Alliance decided to make connecting to WPA-enabled networks easier for home users and developed the WPS specification. Their goal was to promote best practices for security while providing ease of use for home users (Wi-Fi Alliance 1). Essentially, they accomplished this by creating a backdoor into your WPA-enabled network.

WPS comes in two modes of operation: a push-button-connect mode and a personal identification number (PIN) mode. The PIN mode is further split into two subcategories, an internal registrar mode and an external registrar mode (Viehböck 3-4). While the push-button mode has security implications of its own, we are going to focus on the external registrar PIN mode of operation.

This is Where Things Get Interesting

The external registrar PIN mode of operation only requires that a foreign wireless device send an 8-digit PIN matching the 8-digit PIN set on the WPS-enabled access point or on the external registrar used to authenticate WPS clients. If the PIN that was sent matches, the access point or registrar responds with the PSK needed to authenticate to the network. Thus, the security of a WPA2-enabled network, even with a strong 60-character passphrase, could potentially be compromised by exploiting an 8-digit PIN. To add insult to injury, the 8-digit PIN is actually 7 digits, with the eighth digit being a checksum of the previous 7. The 8 digits are then split in half during transmission, with digits 1-4 being the first half of the PIN and 5-8 being the second half. During PIN authentication each half of the PIN is sent and authenticated separately. Based on the response given by the access point or registrar for a submitted PIN, an attacker can determine whether the first and second halves were correct independently of each other. At this point, to gain unauthorized access to the network, you essentially just need to brute force two 4-digit PINs, or 10^4 + 10^4. That’s only 20,000 possible combinations. Additionally, since the eighth digit of the PIN is a checksum, you really only have a maximum of 10^4 + 10^3, or 11,000, possible values to brute force (Viehböck 4-6). Keep in mind that this has nothing to do with the strength of your actual WPA passphrase. The most disturbing implication is that an otherwise well-secured, practically impenetrable WPA-PSK network could still be easily compromised by guessing 1 of 11,000 possible values.
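The arithmetic above is easy to verify, and the checksum-digit algorithm is documented in Viehböck's paper and implemented in tools like Reaver. A short sketch:

```python
def wps_checksum(pin7: int) -> int:
    """Compute the 8th (checksum) digit of a WPS PIN from its first 7 digits."""
    accum = 0
    while pin7:
        accum += 3 * (pin7 % 10)
        pin7 //= 10
        accum += pin7 % 10
        pin7 //= 10
    return (10 - accum % 10) % 10

# Halves are verified independently: 10^4 guesses cover the first half,
# and only 10^3 cover the second, since its last digit is the checksum.
first_half_guesses = 10 ** 4
second_half_guesses = 10 ** 3
print(first_half_guesses + second_half_guesses)  # 11000

# The well-known default PIN 12345670 checks out:
print(wps_checksum(1234567))  # 0
```

Compare that 11,000-guess worst case with the 10^8 combinations an attacker would face if the full PIN were verified in one piece.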

What Devices are Affected by This?

This attack was published in late 2011 and unfortunately the vast majority of small office/home office (SOHO) wireless routers in use remain vulnerable. Additionally, most of the wireless routers and access points on the market have the WPS feature enabled by default, and with certain vendors the user isn’t even given the option to disable it! Wireless router vendors have been notified of this vulnerability, and some have already released firmware updates disabling the WPS PIN feature by default or, in some cases, giving the user the option to disable it (Viehböck 9). The problem is that the average home user will probably not routinely update their router firmware and may remain vulnerable indefinitely. A recent scan using Wash, a tool that identifies WPS-enabled networks vulnerable to this attack, revealed 14 vulnerable SSIDs within close proximity to my home. There is also a spreadsheet of known vulnerable devices hosted on Google Docs (WPS Flaw Vulnerable Devices).

How to Protect Yourself

Update your router or access point to the latest firmware available and completely disable the WPS feature. If your device will not let you disable WPS, contact your vendor or consider purchasing a device that will let you. Also, it couldn’t hurt to run the Wash tool and see if your network is listed as being vulnerable. If you want to take it one step further, the Reaver tool will enable you to run the WPS PIN attack against your own network to determine if you are indeed susceptible to this vulnerability.


Borisov, Nikita, Ian Goldberg and David Wagner. “Security of the WEP Algorithm.” n.d. (In)Security of the WEP algorithm. 16 February 2013.

Moskowitz, Robert. Weakness in Passphrase Choice in WPA Interface. 4 November 2003. 17 February 2013. <http://wifinetnews.com/archives/2003/11/weakness_in_passphrase_choice_in_wpa_interface.html>.

Viehböck, Stefan. Brute forcing Wi-Fi Protected Setup. 26 December 2011. Document.

Wi-Fi Alliance. “State of Wi-Fi Security.” January 2012. Wi-Fi Alliance. Document. 16 February 2013. <http://www.wi-fi.org/sites/default/files/uploads/20120229%20State%20of%20Wi-Fi%20Security_09May2012_updated_cert.pdf>.

—. Wi-Fi Certified Wi-Fi Protected Setup. December 2010. Document.

“WPS Flaw Vulnerable Devices.” n.d. Document. 17 February 2013. <https://docs.google.com/spreadsheet/ccc?key=0Ags-JmeLMFP2dFp2dkhJZGIxTTFkdFpEUDNSSHZEN3c>.

Why IPv6 SEND will fail

26 03 2013

Before going into the security extension of the IPv6 Neighbor Discovery Protocol (RFC 4861) called SEcure Neighbor Discovery, or SEND (RFC 3971), and why I think it will fail as a standard, I’d like to lay some groundwork by briefly touching on IPv6 adoption barriers, explaining the classic “Security, Ease-of-Use, Functionality Triangle” (Cyber Safety 1-5), and describing the default IPv6 behavior for neighbor discovery and address assignment.  We will then look at how SEND attempts to secure the discovery process.  I hope that by the end the reader will see that while security is always an ideal, it isn’t always practical in its implementation, and systems must have a degree of practicality on the Internet if they are to be adopted.

It is my observation, from over 15 years’ experience in IT, a BS BA in MIS, an MS in MIS, a CISSP, and a CCNA, that IPv6 as a standard has been very slow in its adoption due to a working infrastructure independent of IPv6, its highly disruptive nature, a lack of supporting equipment, and a general ignorance of the technology.  The only way for IPv6 to obtain global adoption is by the force of an empty IPv4 address pool.  Marketing tools such as “World IPv6 Day” are efforts to migrate networks before the IPv4 pool is depleted and services become unreachable between hosts that have only an IPv4 address and those that have only an IPv6 address.

The “Security, Ease-of-Use, Functionality Triangle” is one way of illustrating the fact that security, if not outright contradictory, works against an intuitive interface and the amount of a product’s functionality.  This idea is supported in the book Geekonomics, in which the author states that security is often left aside because functionality sells products.  Increased functionality increases complexity, which drives up the cost of making those features secure (Rice).  Making a product more secure also makes it harder to use.  One need look no further than the standard computer login screen.  Many of us boot straight into the OS on our personal computers, while at work we must certainly log in first.  We boot straight into the OS because it is easier; or, to put it more charitably, because we rely on a layered security approach of locked doors and windows and alarm systems, so we feel we don’t need a login prompt at home.


We can see in the figure that the product is represented by the dot within the triangle.  As the dot moves toward Ease of Use, it moves away from Functionality and Security.  Think Linux versus Windows here.  Linux has functionality well beyond Windows but can be much harder to use, and OS manufacturers try to bridge the gap with GUI and command-prompt interfaces.  So a completely secure system must be limited to only those functions required, with those functions guarded so that they operate only as intended, when intended.


Keep this idea of the triangle with you while we transition to IPv6 Network Discovery and SEcure Network Discovery as we will circle back to it at the end.

One of the perks of IPv6 is a client’s ability to obtain an address without the use of a DHCP server.  It isn’t much of a perk, since DHCP is already standard in most environments.  In fact, because DHCP is already standard, IPv6 Neighbor Discovery with Stateless Address Autoconfiguration (SLAAC) can become disruptive at implementation.

Here is how SLAAC works:

An IPv6-enabled router is configured by default to answer to the multicast address FF02::2.  Anytime an IPv6-enabled device needs to discover what network it is on, it sends a multicast Router Solicitation packet to FF02::2.  The local gateway responds with a multicast Router Advertisement packet, sent from the router’s interface MAC address to the multicast address FF02::1, the standard all-nodes IPv6 multicast group.  The Router Advertisement packet includes the network prefix and default gateway.  From this the host can derive a link-local IP address, a globally unique IP address, and a randomized globally unique IP address, all three based on its local network block.  The link-local and non-random globally unique IP addresses are created through the extended unique identifier-64 (EUI-64) process.  In short, EUI-64 has the host flip the 7th bit of the 48-bit host MAC address (if it’s a one, it becomes a zero and vice versa) and insert the 16-bit hexadecimal value FFFE in the middle of the address to round out the 64-bit interface identifier of the 128-bit IPv6 address (Barker).
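The EUI-64 expansion is mechanical enough to sketch in a few lines of Python (the MAC address below is an arbitrary example):

```python
def mac_to_eui64_interface_id(mac: str) -> str:
    """Expand a 48-bit MAC into a 64-bit IPv6 interface identifier.

    FFFE is inserted between the two halves of the MAC, and the 7th bit
    of the first byte (the universal/local bit) is flipped.
    """
    octets = bytearray(int(x, 16) for x in mac.split(":"))
    octets[0] ^= 0x02  # flip the universal/local bit
    eui64 = octets[:3] + b"\xff\xfe" + octets[3:]
    return ":".join(f"{eui64[i] << 8 | eui64[i + 1]:04x}" for i in range(0, 8, 2))

print(mac_to_eui64_interface_id("00:1a:2b:3c:4d:5e"))  # 021a:2bff:fe3c:4d5e
```

Prefixing the result with fe80:: yields the link-local address, while prefixing it with the advertised network prefix yields the globally unique address.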



Once this is completed, the host sends a Neighbor Solicitation packet to the derived IPv6 address.  If it receives no response, it knows the address is unique and assigns it to its interface.  This process is called duplicate address detection (DAD).  The host runs DAD for each of its IPv6 addresses (link-local, globally unique, and random globally unique), and its completion also closes the SLAAC process (Barker).

The SLAAC behavior is supported by industry-standard manufacturers, making it easier to implement when transitioning to IPv6.  It provides full network functionality using an automated process.  On the triangle, one could reasonably argue that SLAAC sits at the bottom center, as there are few safeguards against rogue gateways providing false network or gateway addresses.

SEND attempts to remedy some of these insecurities through reliance on a public key infrastructure.  The chart below provides a brief summary of known insecurities in the IPv6 Neighbor Discovery Protocol and SEND’s remedies.

The chart makes it a bit plainer to see how SEND uses cryptographic signatures to guard against a number of attacks.  Again, SEND is an extension to NDP, not a replacement.  The same router and neighbor solicitations and advertisements are used; they are just digitally signed. The problems with this tack are:


  • It is difficult to implement as it requires an established Certificate Authority (CA) on the network.
  • The CA’s root certificate must first be trusted by hosts and routers before any Network Solicitation or Advertisement packets are accepted.  This means either pre-staging equipment with the root certificate or undergoing an initial period of insecurity while accepting the initial untrusted advertisements.
  • The IP addresses move from EUI-64 or static, to a Cryptographically Generated Address (CGA). This address is unrecognizable and more difficult to manage.
  • Opens routers to DoS attacks, because each signed message must be run through a crypto algorithm before acceptance/rejection.  This taxes the processor to the point where a flood of NDP messages could consume enough resources to impact router functionality.
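To make the CGA point concrete, here is a heavily simplified sketch of how RFC 3972 derives an interface identifier from a public key. A real implementation also involves a collision-count loop, the Sec-parameter hash extension, and a DER-encoded key; the inputs below are placeholders:

```python
import hashlib
import os

def cga_interface_id(subnet_prefix: bytes, public_key: bytes, sec: int = 0) -> bytes:
    """Simplified Cryptographically Generated Address interface identifier.

    Hash1 = SHA-1(modifier | subnet prefix | collision count | public key);
    the first 64 bits become the interface ID, with the Sec value encoded
    in the three leftmost bits and the u/g bits forced to zero.
    """
    modifier = os.urandom(16)          # random 128-bit modifier
    collision_count = b"\x00"
    hash1 = hashlib.sha1(modifier + subnet_prefix + collision_count + public_key).digest()
    iid = bytearray(hash1[:8])
    iid[0] = (iid[0] & 0x1C) | (sec << 5)  # set Sec bits, clear u/g bits
    return bytes(iid)

# Placeholder prefix and key bytes, purely for illustration:
iid = cga_interface_id(b"\xfe\x80" + b"\x00" * 6, b"fake-rsa-public-key-bytes")
print(iid.hex())
```

The output is effectively a random-looking 64-bit identifier, which is exactly the manageability complaint in the list above.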

While SEND does provide additional security against spoofed router and host messages, it does not provide enough functionality, and it is difficult to implement and support.  SEND moves too far toward the Security corner of the triangle to be a practical solution.  It is for these reasons, I believe, that neither Microsoft nor Apple support SEND (Perschke).  This will prove to be the final nail in the coffin: without major vendor support, there is nothing to implement in the first place.


Narten, T., E. Nordmark, W. Simpson, and H. Soliman. “Neighbor Discovery for IP version 6 (IPv6).” RFC 4861. Internet Engineering Task Force (IETF), Sept. 2007. Web. 11 Feb. 2013. <http://tools.ietf.org/html/rfc4861>.

Arkko, J., J. Kempf, B. Zill, and P. Nikander. “SEcure Neighbor Discovery (SEND).” RFC 3971. Internet Engineering Task Force (IETF), Mar. 2005. Web. 11 Feb. 2013. <http://www.ietf.org/rfc/rfc3971.txt>.

International Council of E-Commerce Consultants. Cyber Safety. Clifton Park, NY: EC-Council Press, 2010. Print.

Rice, David. “Six Billion Crash Test Dummies: Irrational Innovation and Perverse Incentives.” Geekonomics: The Real Cost of Insecure Software. Upper Saddle River, NJ: Addison-Wesley, 2008. Print.

Barker, Keith. “IPv6-04 IPv6 Stateless Address Autoconfiguration (SLAAC).” YouTube. 25 Aug. 2011. Web. 10 Feb. 2013.

Lakshmi. “IPv6.com – Secure Neighbor Discovery (SEND).” IPv6.com – The Source for IPv6 Information, Training, Consulting & Hardware. N.p., 2008. Web. 11 Feb. 2013. <http://ipv6.com/articles/research/Secure-Neighbor-Discovery.htm>.

Perschke, Susan. “Hackers target IPv6.” Network World – Network World. N.p., 28 Nov. 2011. Web. 11 Feb. 2013. <http://www.networkworld.com/news/2011/112811-hackers-ipv6-253408.html>.

Stretch, Jeremy. “IPv6 neighbor discovery.” Packet Life. N.p., 28 Aug. 2008. Web. 11 Feb. 2013. <http://packetlife.net/blog/2008/aug/28/ipv6-neighbor-discovery/>.

The Increasing Threat to Industrial Control Systems/Supervisory Control and Data Acquisition Systems

23 03 2013

This blog has previously discussed Industrial Control Systems (ICS) and Supervisory Control and Data Acquisition Systems (SCADA) here and again here in November 2012.  Recently, ICS-CERT has released several bulletins that have spelled out trends and numbers showing an increase in the threats to ICS.

How much is the threat increasing?

ICS-CERT noted that in Fiscal Year (FY) 2012 (10/1/2011-9/30/2012) they “responded to 198 cyber incidents reported by asset owners and industry partners” and “tracked 171 unique vulnerabilities affecting ICS products”(ICS-CERT Operational).  This is an approximately five-fold increase over the number of incidents reported in FY2010 (41) (ICS-CERT Incident).

Why is the threat increasing?

While some of this sharp increase may be attributable to ICS-CERT beginning operations in FY2009 (ICS-CERT Incident) and an associated delay in the industry becoming aware of this resource, it is likely that there have been an increasing number of ICS cyber incidents for the following reasons:

1)  “Many researchers” have “begun viewing the control systems arena as an untapped area of focus for vulnerabilities and exploits” and are using “their research to call attention to it.” (ICS-CERT 2010)

2)  Availability of search engines such as SHODAN that are tailored to assist operators, researchers (and attackers) in identifying internet-accessible control systems (ICS-ALERT-12-046-01A)

3)  Increased interest by hacktivists and hackers in ICS (ICS-ALERT-12-046-01A)

4)  Release of ICS exploits for toolkits such as Metasploit (ICS-ALERT-12-046-01A)

5)  An increased interest by attackers, possibly associated with foreign governments, in obtaining information regarding ICS and ICS software, for example stealing information related to SCADA software (Rashid) or, in the case of Stuxnet, attacking ICS to damage or shut down the controlled hardware (Iran).

Why are ICS networks still so insecure?

Some responsibility for the state of ICS security should be attributed to the primacy of Availability in the minds of ICS operators when evaluating the Confidentiality-Integrity-Availability triad.  This leads to long periods of time between declared outage windows in operations, and thus an extended period of time before new hardware or network security can be put in place.  However, it should be noted that ICS insecurity can itself lead to or extend outages, such as the recent failure to restart operations on time at a power generating facility due to an infection of the control environment by a virus on a thumb drive (Virus).  In this instance, availability of the plant was impacted by a security event that extended the planned outage by approximately three weeks (Virus).

How can ICS operators increase security?

With this in mind, it is imperative that ICS operators begin or continue to treat increased security of ICS IT operations seriously, and factor increasing security into their procurement and redesign plans.  Failure to do so can lead to increased outages or damage to operating equipment (see Stuxnet).  The good news is that there are security practices that can be put in place in the (hopefully) tightly controlled ICS environment that may not work in the comparatively more free-wheeling office network, including application white-listing (ICS-TIP-12-146-01B).  As many ICS vendors recommend against applying routine operating system patches, white-listing may assist in preventing the execution of malicious code introduced into the environment (ICS-TIP-12-146-01B).

Other possible security controls that ICS operators should consider implementing include those suggested by ICS-CERT  (ICS-TIP-12-146-01B):

Network Segmentation – As formerly air-gapped control networks are increasingly connected to corporate networks and the internet, it is increasingly important that appropriate security measures be put in place to segment the control network as much as possible from more general-purpose networks (ICS-TIP-12-146-01B)

Role-Based Access Controls – Access based on job role will decrease the likelihood that an employee is given more access than needed by basing their access on their job function and managing this access by job role instead of user by user (ICS-TIP-12-146-01B)

Increased Logging and Auditing – Incident response, remediation, and recovery (including root cause analysis) in the control network requires that detailed logs be kept and available (ICS-TIP-12-146-01B)

Credential Management (including strict permission management) – Where possible, centralized management of credentials should be implemented to ensure that password policy and resets can be performed more easily.  This centralized management will also ensure that superuser/administrator accounts are tracked and can be more easily disabled if needed (ICS-TIP-12-146-01B)

Develop an Ability to Preserve Forensic Data – Much like logging, the ability to preserve forensic data is important to allow for root cause analysis and, if the event is malicious in nature, identification and prosecution of the intruder/malicious actor.  This includes the ability to capture volatile data such as network connectivity or dynamic memory in addition to the more traditional forensics of hard drives. (ICS-TIP-12-146-01B)


“ICS-ALERT-12-046-01A—(UPDATE) Increasing Threat To Industrial Control Systems.” The Industrial Control Systems Cyber Emergency Response Team. Industrial Control Systems Cyber Emergency Response Team., 25 October 2012.  Web.  28 January 2013. < http://www.us-cert.gov/control_systems/pdf/ICS-ALERT-12-046-01A.pdf >

“ICS-CERT – Operational Review Fiscal Year 2012.” ICS-CERT Monitor.  Industrial Control Systems Cyber Emergency Response Team., n.d.  Web.  28 January 2013. <  http://www.us-cert.gov/control_systems/pdf/ICS-CERT_Monthly_Monitor_Oct-Dec2012.pdf >

“ICS-CERT Incident Response Summary Report.” The Industrial Control Systems Cyber Emergency Response Team. Industrial Control Systems Cyber Emergency Response Team., n.d.  Web.  28 January 2013. < http://www.us-cert.gov/control_systems/pdf/ICS-CERT_Incident_Response_Summary_Report_09_11.pdf  >

“ICS-CERT – 2010 Year In Review.”  The Industrial Control Systems Cyber Emergency Response Team. Industrial Control Systems Cyber Emergency Response Team., January 2011.  Web.  28 January 2013. < http://www.us-cert.gov/control_systems/pdf/ICS-CERT_2010_yir.pdf >

“ICS-TIP-12-146-01B— (UPDATE) Targeted Cyber Intrusion Detection And Mitigation Strategies.” The Industrial Control Systems Cyber Emergency Response Team. Industrial Control Systems Cyber Emergency Response Team., 22 January 2013.  Web.  28 January 2013. <http://www.us-cert.gov/control_systems/pdf/ICS-TIP-12-146-01B.pdf>

“Iran Confirms Stuxnet Worm Halted Centrifuges.” CBSNews.com.  CBS News., 29 November 2010. Web. 2 February 2013. < http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml >

“Virus Infection At An Electric Utility.” ICS-CERT Monitor.  Industrial Control Systems Cyber Emergency Response Team., n.d.  Web.  28 January 2013. <  http://www.us-cert.gov/control_systems/pdf/ICS-CERT_Monthly_Monitor_Oct-Dec2012.pdf >

Rashid, Fahmida Y.  “Telvent Hit by Sophisticated Cyber-Attack, SCADA Admin Tool Compromised.” Security Week.  Wired Business Media., 26 September 2012. Web. 2 February 2013. < http://www.securityweek.com/telvent-hit-sophisticated-cyber-attack-scada-admin-tool-compromised >


5 12 2012

IPv6 (Internet Protocol version 6) was developed to address the impending shortage of address space that was a serious limiting factor on the continued use of IPv4. The Internet Engineering Task Force (IETF) initiated work on it as early as 1994 (1).

The worldwide deployment of IPv6 traces back to July 1999, when major industry players and corporations around the world, including manufacturers, research and development institutions, educational organizations, telecom operators, consulting companies, and many others, joined together in a nonprofit organization named the “IPv6 Forum” (2). From that day on, the process of global IPv6 deployment has sped up significantly. By now, IPv6 can to some extent be viewed as the 21st-century Internet. The current status of IPv6 deployment around the world is promising: the USA issued a mandate for vendors to switch to an IPv6 platform by the summer of 2008, and a consulting and R&D firm in Canada has developed a tunnel server that allows any IPv4 node to be connected to the 6Bone.

However, considering that only two commercial IPv6 address ranges have been allocated in North America, operational deployment of IPv6 there may progress slowly, since the IPv4 shortage was not yet that urgent in the region. On the other hand, both Asia and Europe strongly support the deployment of IPv6. China initiated a five-year plan (China’s Next Generation Internet) with the objective of implementing IPv6 early, and put its new IPv6 deployment on display during the Olympics in Beijing. The mobility industry in Europe is also a strong supporter of the transition to IPv6, and the European Telecommunications Standards Institute and the IPv6 Forum have established a cooperation agreement. There are new IPv6 deployment initiatives every day around the rest of the world as well. (3)

IPv6 has a series of new security features compared to IPv4. The first thing to be mentioned here is that IP security (IPsec) is part of the IPv6 protocol suite, and support for it is mandatory. (4) IPsec is a set of Internet standards that uses cryptographic security services to provide confidentiality, authentication, and data integrity. Although IPv4 also adopted IPsec as an optional feature, in IPv6 data is secured from the originating host to the destination host across the intervening routers, whereas in IPv4 it is typically secured only between the border routers of separate networks. (5) IPsec has a fundamental concept named the Security Association (SA). An SA is uniquely identified by the Security Parameters Index (SPI), the destination IP address, and the security protocol. It is a one-way relationship between sender and receiver that defines the type of security services for a connection. IPv6 also has an Authentication Header (AH), which provides data integrity, anti-replay protection, and data authentication for the entire IPv6 packet. In addition, the Encapsulating Security Payload (ESP) header provides confidentiality, authentication, and data integrity for the encapsulated payload. (4)
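Because an SA is identified by the (SPI, destination address, protocol) triple, a receiver's inbound SA database is essentially a lookup table keyed on that triple. A toy sketch of the idea (all names and values below are made up for illustration, not a real IPsec implementation):

```python
# Toy model of an inbound IPsec Security Association database (SAD).
sad = {}

def add_sa(spi, dst_ip, protocol, algo, key):
    """Register a one-way SA, uniquely keyed by (SPI, destination IP, protocol)."""
    sad[(spi, dst_ip, protocol)] = {"algo": algo, "key": key}

def lookup_sa(spi, dst_ip, protocol):
    """Find the SA for an inbound AH/ESP packet, or None if no SA matches."""
    return sad.get((spi, dst_ip, protocol))

# Because an SA is one-way, a two-way conversation needs an SA in each
# direction; here we register just the inbound ESP side.
add_sa(0x1001, "2001:db8::1", "ESP", "AES-CBC", b"\x00" * 16)
print(lookup_sa(0x1001, "2001:db8::1", "ESP"))
```

The triple is enough to select the algorithm and key for each arriving protected packet, which is why the SPI travels in the clear in the AH/ESP header.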

The new security features provided by IPv6 are significant improvements over IPv4, along with IPv6’s other new features. However, they have created several new security issues. First, export laws limit the strength of the encryption algorithms that can be used while ensuring global interoperability. Second, the public-key infrastructure (PKI) has not been fully standardized, which can be a problem for IPsec since it relies on PKI. Furthermore, there are still flaws in the defenses against denial-of-service and flooding attacks. There is also the potential for inadvertent confusion among routers: with hosts able to change IP addresses, the generated traffic may look like a DDoS attack to an IPv4 firewall. Finally, misconfigured IPv6 systems remain a big threat to organizations. (6)

As the CIO of CMU, the first and most important consideration when implementing IPv6 on the CMU campus is that we must not compromise the security of the site. Many common threats and attacks on IPv4 also apply to IPv6, while many new threat possibilities do not appear in the same way as with IPv4. To begin with, I would make reconnaissance more difficult through proper address planning, to prevent attackers from quickly understanding the campus addressing scheme. I would also carefully plan the control of management access to the campus switches, and implement IPv6 traffic policy and Control Plane Policing; controlling IPv6 traffic based on source prefix can help protect the network against basic spoofing (6). However, despite the drawbacks and new security issues mentioned above, the benefits of IPv6 outweigh its shortcomings, since IPv6 provides autoconfiguration capabilities, direct addressing, much more address space, built-in IPsec, and interoperability and mobility capabilities that are already widely embedded in network devices. As the CIO of CMU, I would certainly deploy IPv6. (7)


DNSSEC, which stands for the DNS Security Extensions, was designed to add security to DNS and protect the Internet from certain attacks. The problem was first addressed by Steven Bellovin in a 1995 paper, and the final design was standardized by the IETF in RFCs 4033-4035 in March 2005 (8).

The following two figures represent the level of DNSSEC deployment in the world to date.  Countries marked green have deployed DNSSEC; those marked yellow have plans to deploy it in the near future.



We can see from the figures above that most countries in Europe and North America have deployed DNSSEC. (9)

DNSSEC was designed to protect the Internet from certain attacks, such as DNS cache poisoning. (10) It is a set of extensions to DNS that provides origin authentication of DNS data, data integrity, and authenticated denial of existence. It introduces several new resource record types to add security: Resource Record Signature (RRSIG), DNS Public Key (DNSKEY), Delegation Signer (DS), and Next Secure (NSEC). (10) DNSSEC uses public key cryptography to sign and authenticate DNS resource record sets (RRsets).  Digital signatures are stored in RRSIG resource records and are used in the DNSSEC authentication process. A DS record can refer to a DNSKEY by storing the key tag, the algorithm number, and a digest of the DNSKEY. The NSEC resource record lists two separate things: the next owner name that contains authoritative data or a delegation point NS RRset, and the set of RR types present at the NSEC RR’s owner name. (11) DNSSEC also adds two DNS header flags, Checking Disabled (CD) and Authenticated Data (AD), and supports the DNSSEC OK (DO) EDNS header bit so that a security-aware resolver can indicate in its queries that it wishes to receive DNSSEC RRs in response messages. DNSSEC protects clients from forged data by digitally signing DNS records. Clients can use these digital signatures to check whether the supplied DNS information is identical to that held on the authoritative DNS server. It will also be possible to use DNSSEC-enabled DNS to store other digital certificates; this makes it possible to use DNSSEC as a public key infrastructure for the signing of e-mail. (12)
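The key tag a DS record stores is not cryptographic at all: it is a simple 16-bit checksum over the DNSKEY's RDATA, computed with the algorithm given in RFC 4034 Appendix B. A sketch (the RDATA below is a fabricated placeholder, not a real key):

```python
def dnskey_key_tag(rdata: bytes) -> int:
    """Compute the RFC 4034 key tag over a DNSKEY's RDATA.

    RDATA = flags (2 bytes) | protocol (1) | algorithm (1) | public key.
    The tag is a 16-bit sum with the carry folded back in; a DS record
    uses it (with the algorithm and a digest) to point at a child DNSKEY.
    """
    acc = 0
    for i, byte in enumerate(rdata):
        acc += byte << 8 if i % 2 == 0 else byte
    acc += (acc >> 16) & 0xFFFF
    return acc & 0xFFFF

# Fabricated DNSKEY RDATA: flags=257 (KSK), protocol=3, algorithm=8, dummy key.
rdata = (257).to_bytes(2, "big") + bytes([3, 8]) + b"not-a-real-key"
print(dnskey_key_tag(rdata))
```

Since the tag is only 16 bits, collisions are possible, which is why a validator must still try every DNSKEY whose tag matches rather than trusting the tag alone.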

However, DNSSEC also introduces some new security issues. First, DNSSEC must be able to report when a name is not found, and signing a “not found” record on the fly for each such query could cause a denial of service, while an unsigned record could easily be spoofed. DNSSEC therefore returns a pre-signed record covering a range of names that do not exist, which can be signed offline ahead of time; but because those ranges can be walked, this gives attackers much more information about the network. (12)

As the CIO of CMU, here are a few things I would consider when implementing DNSSEC on campus. First, DNSSEC adds a vast amount of complexity, and its lack of transparency for errors makes it far harder for us to spot and fix issues as they arise, so we must understand the structure and function of DNSSEC thoroughly before implementing it. Second, there will be increasing opportunities for Internet communications breakdowns, since the market currently lacks application providers implementing DNSSEC. The potential for Internet breakdowns is obviously a major factor when considering DNSSEC on campus. In conclusion, we should concede that despite the merits of DNSSEC mentioned above, there are few rewards for a large organization such as CMU in actually running DNSSEC on the Internet today, since most ISPs aren’t validating yet and most applications aren’t yet DNSSEC-savvy. (13) As the CIO of CMU, I would not recommend implementing DNSSEC on campus for the moment.







(6) http://www.darkreading.com/security/news/227300083



(8) http://www.internetdagarna.se/arkiv/2008/www.internetdagarna.se/images/stories/pdf/domannamn/Steve_Crocker_administrationofDNSSEC.pdf

(9) http://www.nlnetlabs.nl/projects/DNSSEC/history.html

(10) http://www.DNSSEC.net/

(11) http://www.rfc-archive.org/getrfc.php?rfc=4034

(12) http://www.techrepublic.com/blog/networking/DNSSEC-whats-the-fuss-all-about-and-what-does-us-homeland-security-have-to-do-with-it/234



The Perils of a Virtual Data Center

13 06 2012

There is little debate that virtualization technology has been one of the key drivers for innovation in the enterprise data center. By abstracting services from hardware into manageable containers, virtualization technology has forced IT engineers to think outside of the box, allowing them to design systems that actually increase service availability while decreasing the physical footprint and cost. And these innovations haven’t stopped with advanced hypervisors or hardware chipset integrations. Innovations in the virtualization space have forced the whole of the IT industry into a new paradigm, bringing along a cascade of innovations in networking, storage, configuration management and application deployment. In short, virtualization ushered in a whole new ecosystem into the data center.

The only problem is that the security folks seem to have been left out of this ecosystem. According to the Gartner Group, 60% of all virtualized servers will be less secure than the physical servers they’ve replaced.[i] The reason, Gartner claims, is not that virtualization technologies are “inherently insecure”, but instead, virtualization is being deployed insecurely. Gartner goes on to enumerate risks in virtual infrastructure that boil down to a lack of distinction between the management and data planes and a dearth of tools to correctly monitor the virtual infrastructure. This observation, when extrapolated to include this new ecosystem of data center technologies layered above or below the hypervisor, illustrates exactly why the security community needs to become more involved in the actual process of building and designing data center architectures.

Flexible Topologies vs. Stringent Security

Security and network engineers have long relied on strictly controlled, hierarchical data center topologies where systems are architected to behave in a static fashion. In these environments, data flows can be understood by “following the wires” to and from nodes. Each of these nodes fulfils discrete functions that have well-documented methods for securing data and ensuring high availability. But once virtualization has been introduced into this environment, the structure and the assurances can disappear. Virtual web servers can in one instant intermingle with their backend database servers on a single hypervisor, and data flows between these virtual machines no longer have to pass through firewalls. In another instant, the same database servers could travel across the data center, causing massive amounts of east-west traffic, thereby disrupting the heuristics on the intrusion prevention and performance monitoring platforms.

Adding further confusion to these new topologies, new “fabric” technologies have been released that bypass the rigidly hierarchical Spanning Tree Protocol (STP).  STP ensured that data from one node to another followed a specifically defined path through the network core, which made monitoring traffic for troubleshooting or intrusion prevention simple. These fabrics (such as Juniper QFabric, Brocade VCS, and Cisco FabricPath) now allow top of rack switches to be configured in a full mesh, Clos or hypercube topology[ii] with all paths available for data transmission and return. This means that if an engineer wants to monitor traffic to or from a particular host, they will have to determine the current path of the traffic and either engineer a way to make this path deterministic (thereby defeating the purpose of the fabric technology) or hope that the path doesn’t change while they are monitoring.

The flexibility afforded by virtualization can cost dearly in security best practices. For instance, a typical security best practice is to "prune" VLANs: removing these layer two networks from unnecessary switches in order to prevent unauthorized monitoring, man-in-the-middle attacks, or network disruptions. But in the virtualized data center, this practice has become obsolete. Consider the act of moving a virtual server from one hypervisor to another. In order to accomplish this task, the VLAN that the virtual machine lives on must exist as a network on an 802.1Q trunk line attached to both servers. If each of these servers is configured to handle any type of virtual machine within the data center, all VLANs must exist on this trunk, and on all of the intermediary switches, producing substantial opportunities for security and technical failures, particularly in multitenant environments.
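What "existing on an 802.1Q trunk" means at the frame level is just a four-byte tag; this hypothetical Python sketch inserts that tag, carrying the 12-bit VLAN ID, into an untagged Ethernet frame. Every VLAN that is not pruned from a trunk travels tagged like this across every switch the trunk touches.

```python
import struct

def tag_frame(frame: bytes, vlan_id: int, pcp: int = 0) -> bytes:
    """Insert an 802.1Q tag into an untagged Ethernet frame.
    The tag is TPID 0x8100 followed by the TCI (3-bit priority,
    1-bit DEI, 12-bit VLAN ID), placed after the two 6-byte MACs."""
    if not 0 < vlan_id < 4095:
        raise ValueError("VLAN ID must be 1-4094")
    tag = struct.pack("!HH", 0x8100, (pcp << 13) | vlan_id)
    # Destination MAC (6) + source MAC (6) come first, then the tag.
    return frame[:12] + tag + frame[12:]
```

A switch that carries the trunk simply forwards anything bearing a tag it has not been told to prune, which is exactly why over-broad trunks widen the attack surface.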

Prior to the introduction of data center fabrics, most network engineers segregated different types of traffic onto different layer 2 domains, forcing all inter-VLAN communication to ingress and egress through distribution layer routers. However, even this most basic of controls can be purposefully defeated with new technologies like VXLAN[iii] and NVGRE[iv]. These protocols allow hypervisor administrators to extend a layer two domain from one hypervisor to another that is logically separated from it by layer 3 boundaries, simply by encapsulating the layer two traffic within an outer, routable packet before handing it off to the layer 3 device. This obviates the security controls that the network provided, and could even allow a VLAN to be easily extended outside of a corporate network perimeter. This possibility illustrates yet another risk in virtualization technologies: separation of duties.
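The encapsulation itself is trivially simple. This Python sketch (function name hypothetical) prepends the 8-byte VXLAN header defined in RFC 7348 to an inner Ethernet frame; the result then rides inside an ordinary UDP/IP packet between hypervisors, so the network only ever sees the outer addresses.

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP destination port for VXLAN (RFC 7348)

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header to an inner Ethernet frame.
    The caller would place the result in a UDP datagram to port 4789,
    hiding the inner MACs and VLANs from intermediate layer 3 devices."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    # Flags byte 0x08 sets the I bit ("VNI present"); the remaining
    # bits and bytes are reserved. The 24-bit VNI sits in the upper
    # bits of the final 32-bit word.
    header = struct.pack("!B3xI", 0x08, vni << 8)
    return header + inner_frame
```

Because the inner frame is opaque to routers and firewalls along the path, any control that keys on the inner VLAN or MAC addresses is blind to this traffic.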

Management and monitoring

In the traditional data center, the separation of duties was easy to understand and segment. Network, systems, and storage engineers all worked together with minimal administrative overlap while security engineers formed a sort of protective membrane to ensure that this system stayed healthy. Yet in the virtual realm, all of these components intermingle and they can be logically managed from within the same few management interfaces. These interfaces allow systems engineers to make changes to networking or storage infrastructure, and network engineers to make changes to hypervisors or virtual machines, and so forth. As networks supporting virtual infrastructures have converged, storage management policies have blurred into the network management realm as switches and routers increasingly transport iSCSI or FCoE traffic in addition to traditional data or voice packets.

None of this would matter quite as much if these new technologies were easy to monitor. But without topographical consistency or separate management domains, monitoring becomes another interesting challenge. The old-fashioned data center typically relied upon at least three levels of monitoring: NetFlow for records of node-to-node data communications, SNMP, WMI or other heuristics-based analysis of hardware and software performance, and some type of centralized system and application logging mechanism such as Syslog or Windows Event Logs.

But in the virtualized data center, the whole doesn't add up to the sum of its parts. While system logging remains unchanged, it stands alone as the only reliable way of monitoring system health; NetFlow and SNMP, meanwhile, are crippled. A few years ago, NetFlow in virtualized environments was completely absent: VM-to-VM traffic that didn't leave a hypervisor simply didn't create NetFlow records at all. Responding to the issue, most vendors added some amount of NetFlow accounting[v] at a price premium. However, the version implemented (v5) still does not support IPv6, MPLS, or VPLS flows. Furthermore, since PCI buses and PC motherboards are not designed for wire-speed switching, there are reports that enabling NetFlow on hypervisor virtual switches can result in serious performance problems.[vi]
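The IPv6 limitation falls straight out of the v5 wire format. This hypothetical Python sketch packs one 48-byte NetFlow v5 flow record; the source and destination address fields are fixed 32-bit values, so there is simply nowhere to put an IPv6 address, an MPLS label, or a VPLS identifier.

```python
import socket
import struct

def netflow_v5_record(src_ip, dst_ip, src_port, dst_port, proto, pkts, octets):
    """Pack one 48-byte NetFlow v5 flow record (illustrative; a real
    exporter prepends a 24-byte v5 header and batches many records).
    Addresses are fixed 4-byte fields, hence no IPv6 support."""
    return struct.pack(
        "!4s4s4sHHIIIIHHBBBBHHBBxx",
        socket.inet_aton(src_ip), socket.inet_aton(dst_ip),
        socket.inet_aton("0.0.0.0"),   # next-hop router
        0, 0,                          # input/output SNMP ifIndex
        pkts, octets,                  # packet and byte counts
        0, 0,                          # first/last SysUptime of the flow
        src_port, dst_port,
        0,                             # pad
        0,                             # cumulative TCP flags
        proto, 0,                      # IP protocol, ToS
        0, 0, 0, 0)                    # src/dst AS, src/dst prefix mask
```

Extending the record would break the fixed layout, which is why IPv6-capable flow export required the template-based v9/IPFIX formats rather than a patch to v5.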

However, even if NetFlow is successfully enabled in the virtual environment, there are still a few ways it can be broken. Once a VXLAN or NVGRE tunnel is established between two hypervisors, the data can be encrypted using SSL or IPsec. These flows will only be seen as layer three communications between two single hypervisors, even if in reality a dozen or more machines are speaking to an equivalent number on the remote hypervisor. The NVGRE/VXLAN problem, combined with the new fabric architectures and the dynamic properties of virtual machines, means that heuristic analysis of virtual data center performance is much less feasible than in static data centers. In a static environment, security engineers can set thresholds for "typical" amounts of data transfer between various nodes. Once system administrators have the capability to dynamically distribute virtual machines across a data center, these numbers become meaningless, at least until a new baseline for traffic analysis is established.

A (Fabric)Path forward

So where does this leave the security engineers who'd like to at least try to keep a handle on the security of their most critical systems in a virtualized data center? Well, first, they can take comfort in the fact that although these technologies are on the horizon, few companies have moved beyond the most basic of virtualization infrastructures. VXLAN and NVGRE are still IETF drafts, which means that even though Cisco, Microsoft and VMware already support them, the standards have not yet been widely adopted by other vendors. And even if equipment from these vendors is in a data center, it's likely that VXLAN or NVGRE are unnecessary for most organizations.[vii] Similarly, the new data center fabric architectures haven't yet seen wide adoption, because they cost enormous amounts of money[viii] and require a massive data center overhaul, from equipment replacement to new fiber plants.[ix]

Also, despite the numerous security gaps created by new virtualization technologies, data center equipment vendors are aware of the problems and engaged in finding solutions. Cisco released a bolt-on virtual switch[x] that can be used to separate the management of networking equipment from the virtual systems environment while also terminating VXLAN connections to allow for security monitoring on VXLAN tunnels. A number of vendors have introduced virtual firewall appliances[xi] that can provide continuous protection even when virtual machines are moved out of the path of a physical firewall. Even SNMP/WMI monitoring gaps are being bridged by vendors who have developed virtualization-aware technologies[xii] that detect virtual machine locations and smooth baseline heuristics once a machine has migrated to another location.

So, all hope is not lost. Depending on where the architecture team is in its research or implementation of these technologies, security engineers are likely to have an opportunity to get a seat at the table, and they will have a burgeoning security toolkit at their disposal to help them get a hold of the process before it gets out of hand.

Network Reconnaissance: The Hacker’s Pre-Attack

10 04 2012

by Jim Forystek

Perhaps the majority of computer attacks occur without the perpetrator gaining physical access to the victim's PC.  In other words, the attacker gains access to the victim's PC via the network.  But how does an attacker access information on a victim's PC in an environment that appears to be relatively secure?  An attempt to gather unauthorized information on a networked PC does not happen automatically.  The events leading up to the attack are usually subtle, requiring the perpetrator to snoop around a network until he or she finds something of interest.  The attacker usually sizes up his victim by utilizing several techniques to identify where a destination host PC may be vulnerable.  Andrew Landsman has identified five common phases of a hacker's approach [LAN09]:

  • Business Reconnaissance
  • Network & System Scanning
  • Gain Access to Networks and Applications
  • Maintain Access
  • Cover Tracks

The focus of this blog is on the second of Landsman's five phases: Network and System Scanning.  Network and system scanning, also known as 'port scanning', exploits a fundamental feature of the TCP/IP protocol: querying a port reveals whether a service is running behind it.  All that is required to start scanning ports is port scanning software installed on a PC that is connected to the Internet.  For example, Nmap is a free software utility that can quickly scan broad ranges of devices and provide valuable information about the devices on a network.  It can be used for IT auditing and asset discovery as well as for security profiling of the network [BRA12].  For a particular IP address, the port scan software will identify which ports respond to messages (packets) and which of several known vulnerabilities seem to be present.  According to Pfleeger, port scanning will reveal three things to an attacker [PFL11]:

  • Which standard ports or services are running and responding on the target system
  • What operating system is installed on the target system
  • What applications and versions of applications are present

How does one scan ports?  There are several different port scanning techniques available, ranging from rudimentary to expert/complex; the latter may combine several techniques to gather information.  It should go without saying that the sophistication of the technique used is proportional to the scanner's level of knowledge.  A commonly used scan type within the Nmap port scanning software is the TCP connect() scan, named after the connect() call that's used by the operating system to initiate a TCP connection to a remote device [MES11].  The TCP connect() scan uses a normal TCP connection to determine if a port is available.  According to Messer, this scan method uses the same TCP handshake that every other TCP-based application uses on the network.  In a TCP connect() scan, a source host sends a packet to a destination host and awaits a response.  If the response from the destination port is 'RST' (reset), then the port is closed and the scan will yield very little information to the inquirer.  However, if the response is 'SYN/ACK', then the destination port is open and willing to communicate, potentially revealing valuable information to the inquirer.
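A connect() scan can be sketched in a few lines of Python using the standard socket library (the function below is an illustration, not Nmap's implementation): `connect_ex()` returns 0 only when the full three-way handshake completes, i.e. when the target answered SYN/ACK.

```python
import socket

def connect_scan(host: str, ports, timeout: float = 1.0):
    """Minimal TCP connect() scan: attempt a full three-way handshake
    on each port. connect_ex() returns 0 when the handshake completes
    (port open) and a nonzero error code when it is refused or ignored."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Because it completes real connections, this scan is the easiest to write but also the noisiest: every probe shows up in the target's connection logs, which is why stealthier half-open (SYN) scans exist.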

What can open ports reveal to a hacker?  Probing the network can reveal vulnerabilities; the intent is to gain information and services that the hacker should not have access to.  This is where hackers learn more about firewalls, routers, IDS systems and other network components, which ultimately leads to information about known vulnerabilities of network devices.  Open ports can lead to a hacker gaining direct access to services and possibly internal network connections [LAN09], which is phase three of Landsman's definition of the hacker's approach.  Port scanning is one of the most popular reconnaissance techniques attackers use to discover services that they can break into.  All machines connected to a network may run many services that listen on well-known, and not-so-well-known, ports.  A port scan helps an attacker find which ports are available, i.e., what service might be listening on a port.  The type of response received from a port scan indicates whether the port is used and can therefore be probed further for weakness [MAT10].

Scanning ports within a network to determine available services is not illegal, so how does one prevent unwanted port scanning?  One cannot fully prevent port scanning without compromising one's ability to communicate over a network.  However, there are a couple of things one can do to reduce exposure during an unwanted port scan.  First, disable all unused services on the PC.  This can be accomplished by installing Nmap, scanning one's own PC to see if there is anything of interest, and then turning off whatever is not necessary.  Second, leverage a firewall to filter scan requests.  A firewall can reply to a port scan in three ways: open, closed, or no response [COB06].  Open ports are the most vulnerable, for obvious reasons; if vulnerabilities exist on open ports, one can patch the weaknesses, which reduces the risk of being attacked.  A closed port will respond with a message indicating that it is closed, and 'genuine' requests will stop making attempts to query the port.  If repeated attempts are made, the firewall can log these unnecessary attempts and block the source IP from future scans.  'No response' is similar to closed, but the destination IP will not respond to the source at all.
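The three firewall reply types map directly onto the errors a connecting socket sees, as this hypothetical Python sketch shows: a completed handshake means open, a connection refused (the RST reply) means closed, and silence until the timeout suggests the probe was dropped.

```python
import errno
import socket

def probe_port(host: str, port: int, timeout: float = 1.0) -> str:
    """Classify a port the way a firewall's reply (or silence) does:
    'open' (handshake completed), 'closed' (RST, connection refused),
    or 'no response' (probe dropped until the timeout expires)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        try:
            err = s.connect_ex((host, port))
        except OSError:
            return "no response"
    if err == 0:
        return "open"
    if err == errno.ECONNREFUSED:
        return "closed"
    return "no response"   # timed out or unreachable: likely filtered
```

Running a probe like this against one's own firewall is a quick way to verify that a "drop" rule really produces silence rather than a revealing RST.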

In summary, understanding port scanning and how it can reveal vulnerabilities is much like controlling the doors to your house.  Completely blocking off all traffic to your house may increase the safety of your home, but it does not provide an efficient method to enter and exit.  A more effective method is to install reliable locks and distribute keys to trusted members so they can freely enter and exit under controlled circumstances.  Whether one is controlling the doors to their house or ports within their PC, a disciplined and well-informed approach must be taken to ensure assets remain safe.


[BRA12] Bradley, Tony.  Nmap Network Mapping Utility.  2012.  Can be found at: http://netsecurity.about.com/od/securitytoolprofiles/p/aaprnmap.htm

[COB06] Cobb, Michael.  How to Protect Against Port Scans.  2006.  Can be found at:  http://searchsecurity.techtarget.com/answer/How-to-protect-against-port-scans

[LAN09] Landsman, Andrew.  The Five Phase Approach of Malicious Hackers.  May 8th, 2009.  Can be found at: http://blog.emagined.com/2009/05/08/the-five-phase-approach-of-malicious-hackers/

[MAT10] Mateti, Prabhaker.  Port Scanning.  2010.  Can be found at: http://www.auditmypc.com/port-scanning.asp

[MES11] Messer, James.  Secrets of Network Cartography: A Comprehensive Guide to Nmap.  2011.  Can be found at: http://www.networkuptime.com/nmap/page3-3.shtml

[MIT12]  Mitchell, Bradley.  What is a Port Number?   2012.  Can be found at: http://compnetworking.about.com/od/networkprotocols/f/port-numbers.htm

[PFL11] Pfleeger, Charles P. and Lawrence Pfleeger, Shari.  Security in Computing, Fourth Edition.  Prentice Hall, 2011.