Public clouds help everyone! The good, bad, and ugly…

21 03 2013

The Good

Public clouds offer a growing set of capabilities to consumers, and their adoption is only accelerating. Gartner predicts that compute-specific services will grow to $20.2 billion in 2016 (qtd. in Columbus). Whether you are looking for capital reduction, ease of access, quick provisioning, or the ability to scale massively, the cloud takes the hard work out of it. Public cloud operating models support online registration and pay-as-you-go billing, which allows anyone with a credit card to consume the service. This consumption model lets organizations and individuals consume potentially massive amounts of resources with minimal upfront cost or technical know-how.

The Bad

Because public clouds are designed to accommodate the largest capacity requests, they are typically built on massive supporting infrastructures with access to near-limitless bandwidth. And because access typically requires only a credit card, they are readily available to anyone, including fraudulent consumers, hackers, and cyber terrorists. These “aggressors” can access and use cloud resources by registering with stolen credit cards or by compromising poorly protected, exposed resources. It does not take much effort or money to get a stolen credit card in an “Amazon-Like Online Bazaar” (Riley). The fact is that fraudulent consumers can sign up through an automated system, use a stolen card, and begin to provision resources without anyone physically verifying their identity. Because these are typically pay-as-you-go monthly services, it can be weeks before a fraudulent consumer is identified through a failed billing. Until that remediation occurs, aggressors can consume resources and conduct their business at the provider’s cost.

The Ugly

Not only are these aggressors able to utilize a cloud for weeks, they are accessing resources that are “unlimited and can be appropriated in any quantity at any time” (Mell and Grance 2).  This creates a burstable resource that was not available for fraudulent use in the past. Large infrastructure and bandwidth are generally expensive, so it would be risky for aggressors to procure and operate a legitimate environment for illegitimate uses; the risk of seizure would always be a concern. With massive cloud infrastructures, aggressors can provision, clone, and migrate systems around the world faster than was ever possible with physical infrastructure, and without complicated malware. If their IP addresses get blacklisted, they simply request new ones from the system and are back online.  Assume for a moment an aggressor has access to three public cloud providers for only 20 days, each with five sites to provision to, each averaging five minutes per clone of a virtual machine: the aggressor could provision more than 82,000 virtual machines in 19 days, leaving a full day to cause havoc with a large, widespread distributed denial-of-service attack. In just the time it would take to identify and process mitigation strategies, even the largest of targets could be jeopardized.  Though unlikely, the idea of the cloud as an asset for aggressors on the Internet should be acknowledged. What are the possibilities with this kind of resource in the wrong hands?
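Under those stated assumptions (three providers, five sites each, one five-minute clone at a time per site), the arithmetic behind the 82,000 figure is a simple back-of-the-envelope calculation:

```python
# Back-of-the-envelope check of the figure above: 3 providers, 5 sites
# each, one 5-minute VM clone at a time per site, running for 19 days.
providers = 3
sites_per_provider = 5
minutes_per_clone = 5
days = 19

parallel_streams = providers * sites_per_provider            # 15 concurrent clone streams
clones_per_stream = (days * 24 * 60) // minutes_per_clone    # 5,472 clones per stream
total_vms = parallel_streams * clones_per_stream

print(total_vms)  # 82080 -- "more than 82,000 virtual machines"
```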

The Problem

Who is ultimately responsible for ensuring legitimate use of these massive public clouds? Is the service provider wholly responsible? Surely a provider cannot be expected to analyze every packet that traverses its network in search of malicious intent. Or should they? Doing so would drive costs up and may be unrealistic in many situations.  Still, service providers share responsibility for reducing fraud in these environments, as fraud reduces the resources available to legitimate customers.  As a public cloud operator and an evangelist of cloud services, I believe these issues must be dealt with as a community.  As everything moves to the cloud, it is important that organizations update their business continuity plans and practice a layered defense.  Service providers must also develop policies and procedures to support the identification and removal of fraudulent consumers and aggressors.  Finally, government agencies need to update policies and processes to deal with evidence gathering and forensic operations in these large multi-tenant environments.


Columbus, Louis. “Forecasting Public Cloud Adoption in the Enterprise.” Forbes. Forbes Magazine, 02 July 2012. Web. 04 Feb. 2013.

Mell, Peter, and Timothy Grance. NIST Definition of Cloud Computing. Publication no. 800-145. Gaithersburg: National Institute of Standards and Technology, 2011. Print

Riley, Michael. “Stolen Credit Cards Go for $3.50 at Amazon-Like Online Bazaar.” Bloomberg. Bloomberg L.P., 19 Dec. 2011. Web. 04 Feb. 2013.

Designing Non-Observable Passwords

19 03 2013

It is said that a system is only as secure as its weakest link.  It probably comes as little surprise that human beings are often cited as the weakest link in information security, often undermining system security by keeping PIN codes in their wallets or even taping a password right onto a monitor.  Criminals in search of personal information need only target the user to find what they need.

But a much bigger security vulnerability is that, for criminals in search of this information, the easiest method of gaining access to a system is often simply to observe users as they input their passwords or PINs directly into the user interface (3).

To help address this growing issue, password and PIN creation has had to evolve to meet increasing security violations. As criminals found ways to obtain passwords, system designers implemented more stringent rules.  For instance, PINs, which are usually 4 digits, were in some cases extended to 6 or even 8 digits.  Passwords now carry rules such as being at least 8 characters long and including a symbol, a number, and a capital letter.
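As a rough illustration, a minimal Python sketch of the kind of complexity policy described above (at least 8 characters, with a symbol, a number, and a capital letter) might look like this; the function name and exact rules are illustrative, not taken from any particular system:

```python
import re

# A sketch of a typical complexity policy: at least 8 characters, with
# a symbol, a number, and a capital letter. Illustrative only.
def meets_policy(password):
    return (len(password) >= 8
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"[0-9]", password) is not None
            and re.search(r"[^A-Za-z0-9]", password) is not None)

print(meets_policy("Secur3!pw"))  # True
print(meets_policy("password"))   # False: no capital, number, or symbol
```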

But the real question is: no matter how long or complex our PINs or passwords are, if a criminal can actually see the information being input on a keypad or screen, how effective can that password really be?  Yearly losses due to this vulnerability have been estimated at nearly $60 million in the US (1).

This is the fundamental problem with visual passwords today: they are too easy to observe.  But researchers have been trying to solve this problem by developing unobservable password and PIN input techniques.  This post will quickly summarize and discuss a few of the current research projects in this area and the inherent advantages and limitations of each.

Integrating an Unobservable Process with the Traditional Process

VibraPass is a system designed to work in conjunction with current ATMs (1).  It is unique in that it adds a second layer of protection to the ATM by leveraging the user’s mobile phone.  The user connects their smartphone to the ATM terminal, and each time the phone vibrates, the user knows that the next input in their password should be a “lie”.  A person trying to observe the input would be confused and unable to decipher the real password.
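As a rough sketch of the idea (not the authors' implementation), the lie-insertion scheme can be simulated in a few lines of Python; the function name, the cue probability, and the convention that a lie digit precedes each cued true digit are all assumptions for illustration:

```python
import random

# A toy sketch of the VibraPass idea (not the authors' implementation):
# when the phone vibrates (with probability lie_overhead), the user
# types a random "lie" digit before the next true digit. The terminal
# knows where the cues were and strips the lies; a shoulder-surfer
# cannot tell lie digits from real ones.
def vibrapass_entry(pin, lie_overhead=0.3, seed=None):
    rng = random.Random(seed)
    observed = []  # the keystroke sequence an observer would see
    for digit in pin:
        if rng.random() < lie_overhead:                # vibration cue
            observed.append(rng.choice("0123456789"))  # lie digit
        observed.append(digit)                         # true digit
    return "".join(observed)

print(vibrapass_entry("1234", lie_overhead=0.5, seed=42))
```

The weakness quoted below follows directly from this model: across repeated observations, the true digits are the ones that never change.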

The concept behind this system is effective and, most importantly, user-friendly, as it builds upon the current easy-to-use PIN process.  The downside, however, as the VibraPass authors admit, is that repeated observation by a criminal would eventually give away the pattern and reveal the real password: “The main weakness of VibraPass is that repeated observations can lead to successful attacks by analyzing the differences between inputs. The highest success rate for an attack can be assumed if the lie overhead is known by the attacker.” (3)

Looking at the security results for VibraPass, we learn that 1 in 10,000 or fewer observers were able to recover the password, but that the scheme was particularly weak against two or more observations (1).

Combining Audio and Sensory Perceptions into Password Creation

Spinlock is a password application developed for touchscreen mobile devices.  It combines several cognitive functions to create a secure, unobservable password process (1).


Figure 1: A design view of the Spinlock application (2)

Figure 1 above shows the basic design of the application and how its settings work. Spinlock works much like a physical dial lock with incremental numbers and audio or haptic cues.  The user goes to their settings and selects a random combination for the password.  They then select the circle, spin it in the correct direction, and begin to count.  Unlike a physical lock, where “three to the right” lands on a designated position on the dial, Spinlock provides completely randomly spaced auditory or haptic cues to notify the user each time they have moved one “space”.  This makes it difficult for an observer to tell how many positions the user has moved on the lock (2).
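A minimal simulation of the randomly spaced cues, with all names and parameters hypothetical, might look like this:

```python
import random

# A toy model of Spinlock's randomly spaced cues: the device emits a
# haptic/audio cue at random angular intervals, so the distance the
# user's finger travels reveals nothing about how many "spaces" were
# counted. All names and parameters here are illustrative.
def spinlock_cue_positions(n_cues, dial_degrees=360, min_gap=10, seed=None):
    rng = random.Random(seed)
    positions, angle = [], 0
    for _ in range(n_cues):
        # each cue lands a random distance past the previous one
        angle += rng.randint(min_gap, 2 * dial_degrees // n_cues)
        positions.append(angle)
    return positions

# The user ignores angles entirely and just counts cues ("spin until
# the third cue"); an observer watching the finger sees only motion.
print(spinlock_cue_positions(5, seed=1))
```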

One disadvantage of this system is that the randomization of the sensory cues can confuse the users themselves, leading to higher input-error rates than traditional PIN or password input methods. “Also, the majority of errors (78%) involved entering digits one higher or lower than the target item. Comments by participants provided a feasible explanation for this; several spontaneously remarked that the randomly distributed nature of the cues made predicting the location of the final target challenging. In particular, several mentioned that unintentionally overshooting the target item was the most frustrating aspect of the experiment.” (2)

Looking at the security results for Spinlock, we learn that 1 in 10,000 observers were able to recover the password, and that multiple observations had no effect on this number (1).


From a user-experience perspective, it is likely that the traditional, visual-based password and PIN system will usually have a higher level of user input accuracy than an auditory or haptic-based system, and will also likely be faster and more efficient.  However, when we look at the security studies, it is clear that the non-visual password systems are much more effective against observation.

Additionally, I believe there is a level of comfort and familiarity between users and the long-known password and PIN system.  Use of auditory and haptic systems might be a little frustrating to understand and use in the beginning, but over time I feel that it will become a norm which people will learn to use.

Besides, if learning to get used to listening or feeling for my password is the alternative to having to memorize a 12-digit code that needs to have at least two capital letters, a symbol, three numbers and needs to be changed every 6 months, then I’m gladly open to learning something new.


  1. Bianchi, Andrea, Ian Oakley, and Dong Su Kwon. “Open Sesame: Design Guidelines for Invisible Passwords.” Computer April (2012): 58-65. Print.
  2. Bianchi, Andrea, Ian Oakley, and Dong Su Kwon. Spinlock: A Single-Cue Haptic and Audio PIN Input Technique for Authentication. Tech. N.p.: Springer-Verlag Berlin Heidelberg, 2011. Print.
  3. De Luca, Alexander, Emanuel von Zezschwitz, and Heinrich Hußmann. “VibraPass – Secure Authentication Based on Shared Lies.” Proc. Conf. Human Factors in Computing Systems (2009): 913-16. Print.


IPv6 and DNSSEC

5 12 2012

IPv6 (Internet Protocol version 6) was developed to address the impending shortage of address space that was a serious limiting factor on the continued usage of IPv4. The Internet Engineering Task Force (IETF) initiated the effort as early as 1994 (1).

The worldwide deployment of IPv6 traces back to July 1999, when major industry players and corporations around the world, including manufacturers, R&D institutions, educational organizations, telecom operators, and consulting companies, joined together in a nonprofit organization named the “IPv6 Forum” (2). From that day on, the global deployment of IPv6 has sped up significantly, and IPv6 can now, to some extent, be viewed as the 21st-century Internet. The current status of IPv6 deployment around the world is promising: the US government mandated that federal agencies support IPv6 on their network backbones by the summer of 2008, and a consulting and R&D firm in Canada has developed a tunnel server which allows any IPv4 node to connect to the 6bone.

However, only two commercial IPv6 address ranges have been allocated in North America, which suggests that operational deployment of IPv6 there may progress slowly, since the IPv4 shortage is not yet that urgent in the region. On the other hand, both Asia and Europe strongly support the deployment of IPv6. China initiated a five-year plan (China’s Next Generation Internet) with the objective of implementing IPv6 early, and put its new IPv6 deployment on display during the 2008 Beijing Olympics. The European mobile industry is also a strong supporter of the transition to IPv6, and the European Telecommunications Standards Institute and the IPv6 Forum have established a cooperation agreement. New IPv6 deployment initiatives appear every day around the rest of the world as well. (3)

IPv6 has a series of new security features compared to IPv4. The first thing to mention is that IP security (IPsec) is part of the IPv6 protocol suite, and it is mandatory. (4) IPsec is a set of Internet standards that uses cryptographic security services to provide confidentiality, authentication, and data integrity. Although IPv4 also adopted IPsec as an optional feature, under IPv6 data is secured from the originating host to the destination host across the intervening routers, whereas under IPv4 it is secured only between the border routers of separate networks. (5) A fundamental IPsec concept is the Security Association (SA), uniquely identified by the Security Parameters Index, the destination IP address, and the security protocol. An SA is a one-way relationship between sender and receiver that defines the type of security services for a connection. IPv6 also has an Authentication Header (AH), which provides data integrity, anti-replay protection, and data authentication for the entire IPv6 packet. In addition, the Encapsulating Security Payload (ESP) header provides confidentiality, authentication, and data integrity for the encapsulated payload. (4)
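To make the AH layout concrete, here is a sketch that packs the fixed part of an Authentication Header as defined in RFC 4302; the field values are purely illustrative:

```python
import struct

# A sketch of the fixed part of the IP Authentication Header (RFC 4302):
# Next Header (1 byte), Payload Len (1 byte), Reserved (2 bytes),
# SPI (4 bytes), and Sequence Number (4 bytes); the ICV follows it.
# The field values below are purely illustrative.
def pack_ah_header(next_header, payload_len, spi, seq):
    return struct.pack("!BBHII", next_header, payload_len, 0, spi, seq)

# The SPI identifies the Security Association at the receiver; the
# sequence number supports anti-replay protection.
hdr = pack_ah_header(next_header=6, payload_len=4, spi=0x1000, seq=1)
print(len(hdr))  # 12 bytes of fixed header
```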

The new security features provided by IPv6 are significant improvements over IPv4, along with its other new features. However, they have created several new security issues.  Firstly, the strength of the encryption algorithms that can be used while ensuring global interoperability is limited by export laws. Secondly, public-key infrastructure (PKI) has not been fully standardized, which is a problem for IPsec since it relies on PKI. Furthermore, there are still weaknesses against denial-of-service and flooding attacks. There is also potential for inadvertent confusion among routers: with the ability to change IP addresses, the generated traffic may look like a DDoS attack to an IPv4 firewall. Finally, misconfigured IPv6 systems remain a big threat to organizations. (6)

As the CIO of CMU, the first and most important consideration when implementing IPv6 on the CMU campus is that we must not compromise the security of the site. Many common threats and attacks on IPv4 also apply to IPv6, while, on the other hand, many new threat possibilities do not appear in the same way as with IPv4. To begin with, I would make reconnaissance more difficult through proper address planning, to prevent attackers from quickly understanding the campus addressing scheme. I would also carefully plan the control of management access to the campus switches, and implement an IPv6 traffic policy and Control Plane Policing, controlling IPv6 traffic based on source prefix to help protect the network against basic spoofing (6).  Despite the drawbacks and new security issues mentioned above, the benefits of IPv6 outweigh its shortcomings: IPv6 provides autoconfiguration capabilities, direct addressing, much more address space, built-in IPsec, and interoperability and mobility capabilities that are already widely embedded in network devices. As the CIO of CMU, I would certainly deploy IPv6. (7)
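As a small illustration of address planning, Python's standard ipaddress module can sketch how a campus prefix breaks into subnets; the prefix below is from the IPv6 documentation range (2001:db8::/32), standing in for a real campus allocation, and the sparse subnet choice is just an example of avoiding sequential numbering:

```python
import ipaddress

# Sketch of address planning with the standard ipaddress module. The
# prefix stands in for a real campus allocation; picking sparse,
# non-sequential subnet IDs makes address-scanning reconnaissance
# harder than numbering subnets 1, 2, 3, ...
campus = ipaddress.ip_network("2001:db8:ab::/48")
subnets = list(campus.subnets(new_prefix=64))

print(campus.num_addresses)   # 2**80 addresses in the /48
print(len(subnets))           # 65536 possible /64 subnets
print(subnets[0x1f3])         # e.g. a sparsely chosen subnet ID
```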


DNSSEC, which stands for the DNS Security Extensions, was designed to add security to DNS and protect the Internet from certain attacks. The need was first raised by Steven Bellovin in a 1995 paper, and the final design was standardized by the IETF in RFCs 4033-4035 in March 2005 (8).

The following two figures represent the level of DNSSEC deployment in the world to date.  Countries marked green have deployed DNSSEC; those marked yellow plan to deploy it in the near future.



We can see from the figures above that most countries in Europe and North America have deployed DNSSEC. (9)

DNSSEC was designed to protect the Internet from certain attacks, such as DNS cache poisoning. (10) It is a set of extensions to DNS that provides origin authentication of DNS data, data integrity, and authenticated denial of existence. It adds several new resource record types: Resource Record Signature (RRSIG), DNS Public Key (DNSKEY), Delegation Signer (DS), and Next Secure (NSEC). (10) DNSSEC uses public-key cryptography to sign and authenticate DNS resource record sets (RRsets).  Digital signatures are stored in RRSIG resource records and are used in the DNSSEC authentication process. A DS record can refer to a DNSKEY by storing the key tag, the algorithm number, and a digest of the DNSKEY. The NSEC resource record lists two separate things: the next owner name that contains authoritative data or a delegation-point NS RRset, and the set of RR types present at the NSEC RR’s owner name. (11) DNSSEC also defines two DNS header flags, Checking Disabled (CD) and Authenticated Data (AD), and supports the DNSSEC OK (DO) EDNS header bit, with which a security-aware resolver can indicate in its queries that it wishes to receive DNSSEC RRs in response messages. DNSSEC protects clients from forged data by digitally signing DNS records; clients can use the digital signature to check whether the supplied DNS information is identical to that held on the authoritative DNS server. It will also be possible to use DNSSEC-enabled DNS to store other digital certificates, which makes it possible to use DNSSEC as a public-key infrastructure for signing e-mail. (12)
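The signing-and-validation data flow can be sketched in a toy example. To stay self-contained, this sketch substitutes an HMAC with a shared key for the public-key RRSIG/DNSKEY signatures that real DNSSEC uses, so it shows only the flow, not the actual cryptography; all names and the key are hypothetical:

```python
import hashlib, hmac

# Toy sketch of the DNSSEC signing idea: the zone owner signs an RRset
# and publishes the signature as an "RRSIG"; a validator recomputes and
# compares. Real DNSSEC uses public-key algorithms (RSA/ECDSA) with
# DNSKEY records, not a shared HMAC key -- this is only a stand-in.
ZONE_KEY = b"example-zone-signing-key"   # hypothetical key material

def sign_rrset(name, rtype, records):
    # canonicalize: sorted records so the signature is order-independent
    canonical = (name + "|" + rtype + "|" + "|".join(sorted(records))).encode()
    return hmac.new(ZONE_KEY, canonical, hashlib.sha256).hexdigest()

def validate(name, rtype, records, rrsig):
    return hmac.compare_digest(sign_rrset(name, rtype, records), rrsig)

rrsig = sign_rrset("www.example.com.", "A", ["192.0.2.1", "192.0.2.2"])
print(validate("www.example.com.", "A", ["192.0.2.2", "192.0.2.1"], rrsig))  # True
print(validate("www.example.com.", "A", ["198.51.100.9"], rrsig))            # False: forged data fails
```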

However, DNSSEC also introduces some new security issues. Firstly, DNSSEC must be able to report when a name is not found, and producing a signed “not found” answer on demand could enable denial of service, while an unsigned answer could easily be spoofed. The compromise is that DNSSEC returns pre-signed NSEC records covering ranges of names that do not exist, which can be signed offline ahead of time, but this lets attackers walk the zone and gain much more information about the network. (12)

As the CIO of CMU, here are a few things I would consider when implementing DNSSEC on campus. Firstly, DNSSEC adds a vast amount of complexity, and its lack of transparency around errors makes it far harder for us to spot and fix issues as they arise, so we must understand the structure and function of DNSSEC thoroughly before implementing it. Secondly, there will be increasing opportunities for Internet communication breakdowns, since the market currently lacks application providers implementing DNSSEC; the potential for such breakdowns is obviously a major factor when considering DNSSEC on campus. In conclusion, despite the merits of DNSSEC described above, there are few rewards today for a large organization such as CMU in actually running DNSSEC, since most ISPs are not validating yet and most applications are not yet DNSSEC-savvy. (13) As the CIO of CMU, I would not recommend implementing DNSSEC on campus for the moment.

Enterprise Resource Planning systems

4 12 2012

Enterprise Resource Planning (ERP) systems integrate core business functions into one system that maintains all assets and resources. ERP applications are found in many companies, and each system spans the entire company, often integrating with its customers and suppliers to become a single fluid system. With so many touch points on the system, it is important to have procedures governing the policies and technology factors of each ERP system.  As an undergraduate student, I had the opportunity to take an ERP systems course.  Throughout the class there were labs where students used a SAP GUI interface to simulate a muffin-making company. In this simulation, we had to use the ERP system to produce a large batch of muffins, from the preliminary stages of acquiring raw materials all the way through the production stages of mixing the ingredients, baking, and distributing. The final labs included the accounting and finance modules of the ERP system as well as a customer-relationship management component. While this lab was fictional, and each student had access to every part of the SAP ERP system, it demonstrated just how connected each part of the system is to every other module in the ERP system and how important a secure system is in an enterprise.

As ERP systems are implemented and configured, it is important to integrate security features from the start. Security can often be overlooked as companies strive to complete ERP projects on time and on budget. Security features should be factored into the development and deployment of an ERP system from the start to avoid major revisions to the system in the future. “ERP systems must be able to process a wide array of business transaction and implement a complex security mechanism that provides granular-level access to users” (Pandey 1). Having a system that can process large amounts of data across various departments while still being secure from unauthorized users or hackers can prove to be a challenge. Integration of suppliers and customers throughout the supply chain increases the number of authorized user accounts but also “introduces new entry points to business systems from outside the traditional IT security perimeter” (VanHolsbeck 1). This forward and backward integration of customers and suppliers on a collaborative ERP system can be a serious vulnerability if critical measures are not taken to ensure security.

An ERP system consists of a three-tier client-server architecture. The first tier is the presentation layer, a Graphical User Interface (GUI) that accepts input and returns output to the user (She 154). The application layer processes the input received from the presentation layer. The database layer manages the data for the entire company and often includes the operating system and hardware components of the ERP system (She 154). In addition to these tiers, ERP systems use web-based services to complete tasks. A variety of markup languages, including SAML (Security Assertion Markup Language) and XACML (XML Access Control Markup Language), can be used within an ERP system to help secure web technologies (She 162). ERP systems are easily customizable to different industries such as manufacturing, finance and banking, healthcare, and retail. With this large amount of customization, companies should be aware of the security issues involved in implementing an ERP system with custom code for transactions, programs, roles, and authorizations (Medvedovskiy 26). Since each ERP system contains a multitude of modules for each functional business area, patching weaknesses in the ERP can be very costly, but it is important for the longevity of the system.

ERP systems are most secure when they follow the role-based access control model. As personnel within a company move around and change jobs, their job description should determine which areas of the ERP system they can access and which areas they no longer need to view. By following this access control model along with the principle of least privilege, companies can mitigate the insider threat by reducing their exposure. Constraints such as time- and day-based restrictions should be in place to limit access for authorized users, and if the company runs a decentralized system with multiple administrators, the most senior administrator should grant or deny access (She 158). Thorough audit logs are another important component of a secure ERP system. With so many transactions across different departments, managers can be concerned about the system’s performance if every transaction is recorded. “In a compromise between security and performance, enterprises can avoid logging every detail of system activity and focus on meaningful information that’s relevant to the transaction” (VanHolsbeck 2). Audit log systems can also be programmed to identify anomalies and alert an administrator, which helps use resources more efficiently. Since ERP systems also maintain financial accounting information, efficient audit logs are necessary under the Sarbanes-Oxley legislation of 2002. Along with audit logs, enterprises should practice sound internal control monitoring, both to deter malicious insiders and to protect the system (VanHolsbeck 4). Since the ERP system is company-wide, it is vital to have a strong password policy in place to authorize use, as well as a method to change passwords when necessary; allowing weak passwords on the ERP system could let outside attackers gain proprietary knowledge about the business and cause damage. Purchasers of ERP systems should validate that vendors have a means to encrypt passwords that are stored on the system (Hughes 1). Protecting stored passwords is another level of security that can shield the system if it is ever compromised.
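As an illustration of protecting stored passwords, here is a minimal sketch using salted, iterated hashing (PBKDF2 from Python's standard library) rather than reversible encryption; no specific ERP product is implied, and the parameters are illustrative of the general technique:

```python
import hashlib, hmac, os

# Sketch of the standard technique for protecting stored passwords:
# salted, iterated hashing (PBKDF2-HMAC-SHA256). Even if the stored
# values leak, the plaintext passwords are not directly revealed.
def hash_password(password, salt=None, iterations=200_000):
    salt = salt or os.urandom(16)   # unique salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=200_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = hash_password("Secur3!pw")
print(verify_password("Secur3!pw", salt, digest))  # True
print(verify_password("wrong", salt, digest))      # False
```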

A variety of different-sized businesses now use ERP systems as the costs of implementing and maintaining them continue to decrease. Ensuring that all authorized users of an ERP system have secure access, while still achieving a high degree of availability, is a continuous goal. Information security policies should focus not only on network perimeter security but also on the in-house ERP systems that manage day-to-day business functions.


Medvedovskiy, Ilya, and Alexander Polyakov. “ERP Security. Myths, Problems, Solutions.” Digital Security (2010): 1-75. Digital Security. Web. 6 Nov. 2012.

Pandey, Santosh K. “Major Challenges in Auditing ERP Security.” IT Harmony, n.d. Web. 3 Nov. 2012.

She, Wei, and Bhavani Thuraisingham. “Security for Enterprise Resource Planning Systems.” Information Systems Security 16.3 (2007): 152-63. Web. 5 Nov. 2012.

Van Holsbeck, Mark, and Jeffrey Z. Johnson. “Security in an ERP World.” 24 May 2004. Web. 5 Nov. 2012.

Memory Forensics

2 12 2012

Forensic Science

The word “forensic” comes from the Latin word “forensis”, which means “pertaining to courts of law” (Harper). In the Forensic Science and Standards Act of 2012, forensic science is defined as “the basic and applied scientific research applicable to the collection, evaluation, and analysis of physical evidence, including digital evidence, for use in investigations and legal proceedings, including all tests, methods, measurements, and procedures” (Forensic Science and Standards Act of 2012, 112TH CONGRESS 2D SESSION, 2012).

Locard’s Principle

“Anyone or anything entering a crime scene takes something of the scene with them, or leaves something of themselves behind when they depart” (Saferstein, 2001).

When I first read this, it reminded me of the “observer effect” in physics: it is impossible to measure any characteristic of a system without becoming part of that system. In other words, the presence of the observer changes the results of the measurement.

In a crime scene investigation, investigators have to show great care and responsibility to minimize the effects of the investigation process on the phenomena being investigated. This is the main reason investigators first power off the systems by pulling the plug and then, with the help of write blockers (special equipment that prevents any change to the disks being read), take a bit-by-bit image of the disks. Securing the integrity of the data against any unwanted modification is crucial for the investigation.

This, in turn, causes the loss of critical data in the volatile memory (RAM, CPU registers, and caches) of the systems. When we turn off a computer, the data stored in volatile memory is simply lost, because these devices were designed for fast access and can hold data only in the presence of electric current. The transistors in these devices lose the charge they hold over time and are refreshed periodically; when we cut the power, the memory cells lose their charge, and therefore the data, within milliseconds.

To overcome the problem of losing volatile data, two new approaches are attracting attention today:

  • Analysis of live systems
  • Memory forensics (Huebner, Bem, Henskens, & Wallis, 2007)

Memory Forensics

Memory forensics deals with the analysis of memory images. For this, you need a memory dump (image) taken from the running machine, which can be produced by a memory-dumping utility like WinDD, WinEn, or MDD. On Unix, the dd command can be used to read memory and produce an image of it. What dd does is simply copy a certain number of bytes from an input stream (in our case, memory exposed as a device under “/dev”) to an output stream (a binary file).
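A minimal Python analogue of that copy loop might look like the sketch below; ordinary files stand in for the memory device, since reading something like /dev/mem requires root privileges and an OS that still exposes it:

```python
# A minimal Python analogue of what dd does: copy a fixed number of
# fixed-size blocks from an input stream to an output stream. Ordinary
# files stand in for the memory device here.
def dd(infile, outfile, block_size=4096, count=None):
    """Copy up to `count` blocks of `block_size` bytes from infile to
    outfile; copy until EOF when count is None. Returns blocks copied."""
    copied = 0
    with open(infile, "rb") as src, open(outfile, "wb") as dst:
        while count is None or copied < count:
            block = src.read(block_size)
            if not block:   # EOF
                break
            dst.write(block)
            copied += 1
    return copied
```

The real dd adds seeking, padding, and device handling, but the core loop is the same.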

Before we go further, we have to understand a basic point. To take an image of memory, we must run a dump utility, which occupies space on the hard disk and, when run, in memory; depending on the OS’s memory management and file system, this changes the memory and disks and can destroy some valuable forensic data. The memory dump file itself also occupies space on the system. As a result, the forensic data gathered this way may not be admissible as evidence in court. This does not make memory forensics less valuable, however: if forensic tools are implemented at the kernel level, we can expect memory-forensics evidence to be accepted as sound in the near future (Huebner, Bem, Henskens, & Wallis, 2007).

Memory dump files can contain a variety of critical information stored by different processes and OS services, including process information, open files, open connections, passwords, and registry hives.

Open Source Memory Image Analysis Tool: Volatility

Volatility is an open-source set of analysis tools designed to extract forensic evidence from memory images of Windows and Linux machines. It is written in Python and has plugin support, giving people a chance to extend its capabilities.

Volatility has many internal modules to extract data about processes, network connections, open files, and more. As of the Volatility 2.3 release there are more than 120 internal commands; below is a sample set that comes with the framework:

pslist: lists the processes that were running on the system at the time of the memory dump
psscan: also finds processes that had been hidden by a rootkit
connections: shows connections that were active at the time of the memory dump
files: shows files that were opened by a process
strings: outputs strings in the dump file with corresponding virtual addresses
cmdscan: searches memory for commands that attackers entered in a cmd.exe shell
getsids: gets the security identifiers (SIDs) associated with processes
hivescan: scans the memory image for well-known patterns of registry hive structures

Table 1: Volatility commands (Volatility 2.3 release notes, 2012)
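As a toy illustration of what pattern-scanning commands like "strings" do under the hood, the sketch below finds runs of printable ASCII bytes in a raw dump and reports their byte offsets. The dump bytes are fabricated for the example; this is not Volatility's actual implementation.

```python
import re

def extract_strings(dump, min_len=4):
    """Naive version of the 'strings' command: find runs of printable
    ASCII bytes in a raw memory dump and report their byte offsets."""
    pattern = re.compile(rb"[\x20-\x7e]{%d,}" % min_len)
    return [(m.start(), m.group().decode("ascii")) for m in pattern.finditer(dump)]

# Fabricated bytes standing in for a real memory image.
dump = b"\x00\x01MyP@ssw0rd\x00\xff\x03cmd.exe /c whoami\x00"
for offset, text in extract_strings(dump):
    print(hex(offset), text)  # 0x2 MyP@ssw0rd / 0xf cmd.exe /c whoami
```

Real tools work the same way in principle, but scan page by page over gigabytes of raw memory and know how to translate physical offsets back to virtual addresses.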

An Experiment and Results

I conducted an experiment with the Volatility framework to better understand what critical data can be extracted from a memory image. For this experiment I created two small TrueCrypt volumes, encrypted them with AES, and mounted them with the “Cache passwords and keyfiles in memory” option enabled for demonstration purposes.


Figure 1: TrueCrypt password dialog

This option is not enabled by default because of the security risks associated with it; with it enabled, TrueCrypt caches passwords in RAM unencrypted. I then took a memory dump with MDD.


Figure 2: Memory dump with MDD

To extract the keys from this image I used Jesse Kornblum’s “Cryptoscan” plugin for the Volatility framework. Because of the large size (around 3.5 GB) of the memory dump file, the scanning process took more than an hour, but in the end the plugin found the keys it was searching for:


Figure 3: Passwords in plain text

So, with this search plugin we were able to reveal the keys in a little more than an hour.
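To get a feel for why scanning a multi-gigabyte image takes time, here is a minimal sketch of a chunked pattern scan. It is not Kornblum's actual Cryptoscan (which parses TrueCrypt's cached-password structures); it is just a naive search that reads the image in fixed-size chunks, keeping a small overlap so matches that span chunk boundaries are not missed. The marker bytes and offsets are made up for the example.

```python
import io

def scan_for_pattern(stream, pattern, chunk_size=1 << 20):
    """Scan a (potentially multi-gigabyte) image in fixed-size chunks,
    carrying a len(pattern)-1 byte overlap between chunks so matches
    straddling a chunk boundary are still found."""
    offsets, pos, tail = [], 0, b""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        buf = tail + chunk          # buf[0] sits at absolute offset pos - len(tail)
        start = 0
        while (i := buf.find(pattern, start)) != -1:
            offsets.append(pos - len(tail) + i)
            start = i + 1
        tail = buf[-(len(pattern) - 1):] if len(pattern) > 1 else b""
        pos += len(chunk)
    return offsets

# Fabricated image: a marker at offsets 5000 and 10010.
image = io.BytesIO(b"\x00" * 5000 + b"secretpass" + b"\x00" * 5000 + b"secretpass")
print(scan_for_pattern(image, b"secretpass", chunk_size=4096))  # [5000, 10010]
```

With a 3.5 GB image the cost is dominated by reading and searching every byte, which is why even a simple signature scan can take over an hour on 2012-era hardware.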


In this post, we introduced memory forensics, discussed the open-source Volatility tool commonly used to extract useful information from memory dumps, showed how a memory image can be taken, and demonstrated extracting TrueCrypt keys with the help of the Cryptoscan plugin. Keys must be loaded into memory (though not necessarily in plaintext, as in our example) for the processor to use them for on-the-fly encryption and decryption (Kaplan, 2007). So although you may have strong encryption, memory can reveal your keys, and your state-of-the-art, unbreakable encryption will be of no value.


Forensic Science and Standards Act of 2012, 112TH CONGRESS 2D SESSION. (2012, July 12).

Harper, D. (n.d.). Forensic. Retrieved from Online Etymology Dictionary:

Huebner, E., Bem, D., Henskens, F., & Wallis, M. (2007). Persistent systems techniques in forensic acquisition of memory. Digital Investigation, 130-131.

Kaplan, B. (2007). RAM is Key, Extracting Disk Encryption Keys From Volatile Memory, Thesis Report. Pittsburgh: Carnegie Mellon University.

Saferstein, R. (2001). Forensic science handbook. Englewood Cliffs, NJ: Prentice Hall.

Volatility 2.3 Release Notes. (2012, Oct 24). Retrieved from Volatility, An advanced memory forensics framework:



Cyber Lawfare: Establishing Norms for Use of Cyber Weapons

1 12 2012

by Max Blumenthal

Cyberwar is upon us. That is the call being issued by top American cyber experts in the wake of increased attacks from Iran and China. The U.S. is also stepping up its offensive cyber capabilities. As Secretary of Defense Leon Panetta stated, “We are facing the threat of a new arena in warfare that could be every bit as destructive as 9/11” (Thompson). These attacks are often directed at private enterprises that are considered critical infrastructure, such as banks and utility companies. In conventional warfare, there is a clear distinction between attacking strategic targets and protecting civilians. In cyberwar, no such distinction currently exists. One way of beginning to protect civilians in a cyber conflict is to create a treaty for international humanitarian law for cyberwarfare (Schneier). This treaty should be modeled after previous international humanitarian law, such as the Geneva Conventions and arms limitations treaties.

Geneva Conventions

The four Geneva Conventions are internationally agreed upon rules for nation-state conduct in warfare, created after the tragic loss of tens of millions of civilian lives during World War II. The first Geneva Convention requires states to protect wounded soldiers as well as refrain from targeting medical personnel in a combat zone. The second Convention allows neutral parties to care for the wounded without being attacked by either side of a conflict. The third Convention extends protections for non-State actors, while the fourth Convention prevents collective punishment. Additional protocols prevent perfidy and indiscriminate attacks on civilian targets, or total war (Red Cross).

In cyberwarfare, attacks should also respect these established norms. Perhaps the most important, yet most challenging to enforce, of these conventions is the prohibition against perfidy. Neil Rowe, of the Naval Postgraduate School, argues that most cyber-attacks are a form of perfidy in that they masquerade as a legitimate program but carry a malicious payload. When the payload is discovered, some attacks may try to frame another target to avoid reprisal attacks. Rowe suggests that to prevent wrongful attribution of an attack, digital signatures could be required on cyber weapons to reduce the risk of collateral damage (Rowe). To allow for concealment of an attack while still providing attribution, these “signatures could be hidden steganographically”. The fourth Geneva Convention also offers an important rule for cyberwarfare: the prohibition against collective punishment. Unrestricted cyberwarfare should be eliminated. This means attacks on vital civilian systems, such as water treatment facilities and the financial system, should not occur, because they provide little military benefit but create massive civilian harm.

Arms Limitation or Weapons Ban

The Strategic Arms Limitation Talks agreements (SALT I and SALT II) sought to halt Soviet and American production of nuclear ballistic missile launchers. In cyberwar, an arms limitation treaty has been championed by Russia and China and recently won the consideration of the United States (Gorman). Such a treaty could allow cyber weapon development and use against certain military systems, but outright ban weapons that attack civilian infrastructure or military command and control systems. The greatest difficulty with such an agreement is enforcement. Unlike a physical weapon, a cyber weapon is fairly easy to conceal from inspectors (Goldsmith). A treaty also does not necessarily prevent countries from giving weapons technology to non-state actors, the main roadblock to U.S. adoption of the Russian proposal.

In contrast to an arms limitation treaty, an outright ban has also proven effective for certain weapons. For example, the Biological Weapons Convention prohibits the production and use of biological and toxic arms in warfare. The reason for an all-out ban on biological weapons is that this kind of warfare was deemed indiscriminate and “abhorrent” (Red Cross), even in war. Poorly designed cyber weapons have the potential for significant unintended consequences. For example, a planned U.S. cyber attack on Iraq’s financial system in 2003 was called off because “Bush administration officials worried that the effects would not be limited to Iraq but would instead create worldwide financial havoc” (Markoff and Shanker). As with an arms limitation treaty, enforcement would be difficult, but inspectors would only need to find evidence of a cyber weapon’s development rather than determine the weapon’s target. Bruce Schneier recognizes that while this may be the ideal policy, a ban on “unaimed or broadly targeted weapons” (Schneier) would also have a significant positive effect and be easier to implement.


Besides a number of enforcement concerns, a treaty’s effectiveness is also hindered by the gray area that separates cyber war from cyber espionage. A treaty would need to govern computer network attacks while still allowing computer network exploitation. An all-out cyber weapons ban is unlikely to happen, but it is possible that certain weapons, such as those that target SCADA units, or certain targets could be banned. An arms limitation treaty offers a more moderate approach that allows some production and testing of weapons but requires unrestricted inspections, which rival nations may find difficult to agree to. Finally, a treaty for cyberwarfare provides an opportunity to establish rules of engagement in cyberspace, and it has the potential to improve protections for civilians and to limit the development and deployment of cyber weapons deemed so destructive that they are immoral, even in warfare.


  1. Goldsmith, Jack. “Cybersecurity Treaties: A Skeptical View.” 9 March 2011. Hoover Institute Task Force on National Security and Law. 29 October 2012.
  2. Gorman, Siobhan. “U.S. Backs Talks on Cyber Warfare.” 4 June 2010. Wall Street Journal. 29 October 2012.
  3. Markoff, John and Thom Shanker. “Halted ’03 Iraq Plan Illustrates U.S. Fear of Cyberwar Risk.” 1 August 2009. New York Times. 29 October 2012.
  4. Red Cross. “Chemical and biological weapons.” 29 October 2010. International Committee of the Red Cross. 29 October 2012.
  5. —. “The Geneva Conventions of 1949 and their Additional Protocols.” International Committee of the Red Cross. 29 October 2012.
  6. Rowe, Neil. “War Crimes from Cyberweapons.” Journal of Information Warfare 6.3 (2007): 15-25.
  7. Schneier, Bruce. “Cyberwar Treaties.” 14 June 2012. Schneier on Security. 29 October 2012.
  8. Thompson, Mark. “Panetta Sounds Alarm on Cyber-War Threat.” 12 October 2012. Time. 29 October 2012.

Online Gaming: Real Money, Real Threats

30 11 2012

by A.J. Holton


Today millions of people across the world are joining together over the Internet to immerse themselves in the virtual world of gaming.  MMORPGs (Massively Multiplayer Online Role Playing Games) are the top guns of the industry, boasting millions of subscribers worldwide.  “New World of Warcraft® expansion sells 2.7 million copies in first week — global subscriber base passes 10 million” (“Alliance and Horde Armies”). This is a game that has been out for 8 years, and it still has many subscribers paying roughly $15 a month for service.  Games like Blizzard’s World of Warcraft are constantly being exploited through cheats and account hacking.  Guild Wars 2 was released in late August 2012 and had account security problems that very day, with more than 11,000 accounts compromised by malware from adversaries (Parrish). It would seem account hacking is somewhat correlated with third-party account modification. The Guardian ran a story on Chinese prisoners who were forced to play this game to turn a real profit through illegal sales (Beijing).  So, as you can see, there is definitely a market for the willing adversary.  The focus here is on Blizzard because I am most experienced with the company, and it is the biggest and most newsworthy; however, security applies to all online games, especially those of the MMORPG variety.  What I aim to discuss is the implementation of what is called a Real Money Auction House, but first I must explain the security measures already in place.

Security Measures

Overall, MMORPG security issues have been growing, forcing companies like Blizzard to come up with ways to counteract them.  “The Battle.net Mobile Authenticator is an optional tool that offers Battle.net account users an additional layer of security to help prevent unauthorized account access” (“Battle.net Mobile Authenticator FAQ”).  The authentication process was needed to help Blizzard deal with the number of account compromises going on.  Basically, it generates a random number, held by both Blizzard and the user, that changes every minute, allowing only the user to log in (“Battle.net Mobile Authenticator FAQ”). Another security measure is the use of spyware like Blizzard’s Warden.  This software takes information from your RAM, hard drive, CPU, IP address, operating system, and more “FOR PURPOSES OF IMPROVING THE GAME AND/OR THE SERVICE, AND TO POLICE AND ENFORCE THE PROVISIONS OF ANY BLIZZARD AGREEMENT” (“World of Warcraft Terms of Use”). Obviously, these security measures were implemented because of the severity of the problem.  We would expect companies like Blizzard to continue making games safer, but sometimes money is more important in the end.
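Blizzard has not published the authenticator's algorithm, but the behavior described above matches a standard time-based one-time password scheme (RFC 6238 style): both sides derive the same short-lived code from a shared secret and the current time window. A minimal sketch, assuming a 60-second window and an 8-digit code (the secret value here is hypothetical):

```python
import hashlib, hmac, struct, time

def totp(secret, t=None, step=60, digits=8):
    """Derive a one-time code from a shared secret and the current time
    window; server and client compute the same value independently, so
    a stolen code is useless once the window passes."""
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"shared-between-user-and-server"        # hypothetical shared secret
print(totp(secret))                               # e.g. "01234567", new value each minute
```

Because the code depends on a secret the attacker does not have, a keylogged password alone is no longer enough to compromise the account.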

Real Money Auction House?

Yes, Blizzard’s Diablo III came with a new, experimental RMAH (Real Money Auction House), which allows users to purchase in-game items on the auction house with real currency.  In an auction house, users can purchase anything from equipment to collectibles.  With the RMAH, you no longer need to spend countless hours collecting materials for in-game currency to purchase items; all you have to do is enter your credit card number, and your transaction is processed almost instantaneously.  I believe this was a bit too ambitious for Blizzard, as security was already compromised frequently.   “This week, our security team found an unauthorized and illegal access into our internal network here at Blizzard,” taken from the Blizzard website, September 2012 (“Important Security Update”). Gaining access to Blizzard’s database would offer an adversary hundreds of account passwords and users’ credit card information.  Blizzard gets a cut from the RMAH: when a player makes a sale, Blizzard takes 15% off the top of the sale price (“Diablo III Auction House”). It almost goes without saying that the RMAH could be a very lucrative business for a skilled adversary to get into.  It would be easy to modify transactions or redirect funds to new accounts.  I can see countless vulnerabilities this new auction house brings to the online gaming world: finding ways in code to repeat a transaction, modifying the value of items before or after transactions, rerouting money to different accounts, and simple password theft and account fraud are all examples of problems that could arise.  If the problem gets too bad, Blizzard could lose the trusted fan base they have been working so hard to maintain.  There is a story about a player losing $200 on the RMAH; the FBI even got involved and was able to help return the user’s money (Usher). This is just one of many problems this implementation has already caused, and the FBI getting involved is nothing to disregard.
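The 15% cut described above works out as a simple split of each sale. A small sketch with hypothetical sale prices (the helper name and two-decimal rounding are my own):

```python
def rmah_proceeds(sale_price, commission=0.15):
    """Split an RMAH sale into the house's 15% cut and the seller's share."""
    fee = round(sale_price * commission, 2)
    return round(sale_price - fee, 2), fee

for price in (10.00, 250.00):
    net, fee = rmah_proceeds(price)
    print(f"sale ${price:.2f}: house keeps ${fee:.2f}, seller nets ${net:.2f}")
```

Every fraudulent or rerouted transaction therefore moves real dollars, which is exactly what makes the RMAH a more attractive target than a gold-only auction house.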
We need to take a look at Blizzard’s perspective to better understand their reasoning behind creating a RMAH.

Blizzard’s perspective is profit-driven in a sense; however, the RMAH does offer a service to players.  Instead of players buying and selling items through third parties, which are usually the main culprit behind compromised accounts, they buy the items through Blizzard (Heartbourne). Looked at from this perspective, it doesn’t seem so bad: it would actually help cut down on account hacking and make Blizzard big profits in the end.  I think using the RMAH as a “security device” is brilliant and could really bring about a new age of gaming, if successful.  I have not found sufficient numbers to determine the success of the RMAH in Diablo III, as sadly I think the game died out much too quickly.  If games continue with this trend, the system could be completely compromised by an adversary getting into the company database.  If they do not implement it, there will still be a demand for purchasing items with real money from third parties, possibly leading to user account exploitation.  It is a tough decision, but I would opt for the RMAH because it has a high profit margin for the company and reduces attacks on users.  I would put more resources into keeping my company’s systems secure, since I do not have as much control over users’ accounts.  All in all, there will always be a market for adversaries in the online gaming realm.  Blizzard will remain a key innovator in the industry, and it will be exciting to see whether other companies start to follow suit. I would like to hear other people’s thoughts on whether a system such as this is a good or bad idea for the future of online gaming.


Beijing, Danny Vincent in. “China Used Prisoners in Lucrative Internet Gaming Work.” The Guardian. Guardian News and Media, 25 May 2011. Web. 01 Nov. 2012.

Blizzard. “Alliance and Horde Armies Grow with Launch of Mists of Pandaria.” Blizzard Entertainment, 04 Oct. 2012. Web. 15 Oct. 2012.

Blizzard. “Battle.net Mobile Authenticator FAQ.” Blizzard Entertainment, n.d. Web. 25 Oct. 2012.

Blizzard. “Diablo III Auction House.” Blizzard Entertainment, n.d. Web. 26 Oct. 2012.

Blizzard. “Important Security Update.” Blizzard Entertainment, n.d. Web. 26 Oct. 2012.

Blizzard. “World of Warcraft Terms of Use.” Blizzard Entertainment, n.d. Web. 25 Oct. 2012.

Heartbourne. “Diablo III Real Money Auction House: Analysis of Fees, Market Forces, and Strategy.” N.p., n.d. Web. 26 Oct. 2012.

Parrish, Kevin. “Guild Wars 2 Accounts Hacked Immediately After Launch.” Tom’s Hardware, 08 Sept. 2012. Web. 20 Oct. 2012.

Usher, William. “Gamer Loses $200 Due To Diablo 3’s RMAH Region Restrictions.” Gaming Blend, n.d. Web. 19 Oct. 2012.