The yellow padlock

30 11 2011

Web browsing is a common activity nowadays; when we are not searching for something, we shop, check our bank accounts, send emails, watch videos, and more. The number of activities we can perform on modern websites is immense; moreover, the tendency is for even more significant activities to move online through cloud computing and other technologies.

However, many of these interactions require some level of security. In the scenarios mentioned above, we would expect at least shopping, checking bank accounts and sending email to be secure. There are a few things we as users can do on our end to enhance security, such as choosing strong passwords and keeping them safe, and looking for the “yellow padlock” on the website if we do not understand the difference between the HTTP and HTTPS prefixes in the address bar.

The yellow padlock has existed since early versions of the most popular web browsers and has carried over into newer versions and newer browsers. In many cases the padlock symbol remains, and although it may have lost its yellow color, it can still be displayed as locked or unlocked to remind the user of the security implemented by the server. Some people have been taught to trust blindly whenever they see a closed padlock and never question what is happening behind the scenes; in fact, in my experience, people often do not even know what the yellow padlock really means or what security decisions we make just by trusting it.

For people who dig deeper and try to understand, or who have been taught to question (like myself), the reality behind the yellow padlock might be a little scary. By this I do not mean that it provides no security, only that we should understand what is behind it and not trust the symbol just because it is there.

In simple terms, the padlock means that the connection between the client and the server supports SSL/TLS encryption through the HTTPS protocol instead of plain HTTP, which means the information sent back and forth can be encrypted.

Moreover, not just any website can display a padlock. For a site to display one, the website must be validated by a Certificate Authority (CA), which issues a certificate that is then used to validate SSL connections [4]. This presents the first issue we should take into consideration: do we really trust the certificate authority? Certificate authorities go through a process to become one and work hard to maintain their credibility (after all, their business is built on credibility, so that you can trust them). However, there have been cases of certificate authorities losing their credibility for different reasons; so to what extent can we trust the current ones?

Obviously this goes beyond the case where we try to log in to a website whose certificate has expired, is invalid, or was issued by a certificate authority not previously known to our browser. Browsers today ship with a predefined set of “trusted” certificate authorities, and any website using a certificate from one of them is treated as reliable by the browser [3]. Some websites use authorities that are not on this list; think back, for example, to the first time you logged in to an HTTPS website hosted at CMU, when you were probably prompted about an unknown certificate and asked whether to trust it. Many people simply accept such certificates because they come from sites they want to visit. Setting the user-education issue aside and getting back to the topic, the point is that we need to evaluate certificate authorities and their reliability ourselves, not just accept what our browsers predetermine, to ensure that the websites we visit are legitimate and trustworthy.
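If you are curious which certificate authorities your own client trusts, most TLS libraries can enumerate the default trust store. Here is a small sketch using Python’s standard `ssl` module (the exact contents of the list depend on your operating system’s CA store):

```python
import ssl

# Load the platform's default trusted CA store into a client context.
ctx = ssl.create_default_context()

# get_ca_certs() returns one dict per loaded root certificate.
cas = ctx.get_ca_certs()
print(f"{len(cas)} trusted root CAs loaded")

for ca in cas[:5]:
    # 'subject' is a tuple of relative distinguished names.
    subject = dict(rdn[0] for rdn in ca["subject"])
    print(subject.get("organizationName") or subject.get("commonName") or "?")
```

Every certificate chain your browser or script accepts silently ends at one of these roots, which is exactly why the question of trusting them matters.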

Once trust in the certificate authority is established, the padlock can be closed and the connection to the server can be made using SSL/TLS, which refers (as noted above) to a method of encryption and authentication on the World Wide Web. SSL is a protocol implemented between the application layer and the transport layer (between HTTP and TCP/IP) and provides three main features [1]:

  • Server authentication
  • Client authentication
  • Data encryption
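The first of these features can be seen directly in a TLS client. The sketch below uses Python’s standard `ssl` module; `www.example.com` is only a placeholder host, and the commented call at the bottom requires network access:

```python
import socket
import ssl

def fetch_server_cert(host: str, port: int = 443) -> dict:
    """Connect over TLS and return the server's validated certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

# A default client context enforces both halves of server authentication
# before any connection is attempted:
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # chain must end at a trusted CA
print(ctx.check_hostname)                    # cert must name the requested host

# Example (requires network access):
#   cert = fetch_server_cert("www.example.com")
#   print(cert["subject"], "valid until", cert["notAfter"])
```

If either check fails, the handshake is aborted, which is the programmatic equivalent of the browser refusing to show a closed padlock.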

Server authentication refers to the process mentioned above and is performed by confirming the server’s identity: the browser checks that the server’s certificate is valid and was issued by a trusted certificate authority. Client authentication is a similar process in which the server confirms the identity of the user; however, since many users do not have certificates, this optional feature is often not implemented in the connection [1].

Data encryption is the third feature and the second point of concern related to the padlock, because it is an optional requirement of the connection, specified as such during the SSL handshake [1]. Without going into the details of the handshake, there is a possibility that the session is carried out without an encrypted link. So how much can we trust that the connection is secure if it might not be encrypted at all? The implications are huge: the traffic sent and received becomes subject to potential attacks and exploits. For example, a simple sniffer could read the information being sent and received as plain text and use it for malicious purposes, or the weakness could enable man-in-the-middle attacks [2].
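This concern can be checked directly: a TLS client’s cipher list tells you whether any encryption-free suites could ever be negotiated. The sketch below (Python’s standard `ssl` module) shows that a modern default context refuses null-encryption ciphersuites, even though the protocol historically allowed them:

```python
import ssl

ctx = ssl.create_default_context()

# get_ciphers() lists every ciphersuite this client would offer.
names = [c["name"] for c in ctx.get_ciphers()]
print(f"{len(names)} ciphersuites enabled")

# SSL/TLS defines NULL (no-encryption) suites; a sane client never
# offers them, so a negotiated session is actually encrypted.
null_suites = [n for n in names if "NULL" in n]
print("null-encryption suites offered:", null_suites)  # expect []
```

An older or misconfigured client that did offer such suites could complete a “secure” handshake that encrypts nothing, which is exactly the scenario described above.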

In conclusion, no matter how accustomed we are to web browsing and the different activities we do while surfing the web, it is important to understand what is happening behind the scenes. The padlock is a good example of a mechanism we are used to seeing every day and trusting without question. After reading more about this, are you curious to see which certificate authorities you are trusting? Will you question yourself the next time you see a padlock? I surely hope so.


[1]. Lidner, Martin. Crypto in the real world. From the lectures at Heinz College, Carnegie Mellon University. Course 95-753 Internet Security. 2011.

[2]. Microsoft Corp. Microsoft Security Advisory (2588513) – Vulnerability in SSL/TLS Could Allow Information Disclosure. September, 2011. Microsoft Security TechCenter.

[3]. Naraine, Ryan. SSL Broken! Hackers create rogue CA certificate using MD5 Collisions. December, 2008. ZDNet.

[4]. Instant SSL. SSL Certificate Validation. 2011.

What is the Sandbox and How Do I Get in it?

29 11 2011

Sandboxes are becoming a security mechanism widely used by application and operating-system developers to provide a more secure user experience. The term “sandbox” gets thrown around quite a bit in the security world, but what exactly is a sandbox? What functions does it perform? And does it make us more secure?

A sandbox is a security mechanism used to enforce additional segregation of applications. The idea behind a sandbox is that it restricts an application’s interactions with your operating system and other processes.[1] Therefore, if an exploit is triggered in the sandbox, in theory it should not affect the parent operating system. In essence, a sandbox is a virtual environment in which an untrusted or potentially vulnerable application can run without affecting the parent operating system or other applications. Every major software company, including Google, Apple and Microsoft, and nearly every major antivirus vendor uses sandboxes as a security mechanism. In theory, using sandboxes to secure an application or operating system is sound; however, reliance on a single security mechanism will always put the end user at risk.
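To make the “restrict an application’s interactions” idea concrete, here is a deliberately crude sketch in Python (Unix-only, since it uses `preexec_fn` and the `resource` module): the untrusted code runs in a child process under hard CPU and memory limits. This illustrates the principle only; it is nowhere near a real sandbox like Chrome’s or OS X’s, which must also mediate file, network and IPC access.

```python
import resource
import subprocess
import sys

def run_restricted(code: str, cpu_seconds: int = 2,
                   mem_bytes: int = 512 * 1024 * 1024):
    """Run untrusted Python code in a child process under hard resource
    limits. Illustrative only, not a real security boundary."""
    def apply_limits():
        # Enforced by the kernel on the child process only; the parent
        # process and OS are unaffected by what the child does.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    return subprocess.run([sys.executable, "-c", code],
                          preexec_fn=apply_limits,
                          capture_output=True, timeout=10)

result = run_restricted("print('hello from the child')")
print(result.stdout.decode().strip())  # hello from the child
```

Even this toy version shows the core trade-off discussed below: the child still shares the kernel with the parent, so any hole in that boundary defeats the whole scheme.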

In recent news, CoreLabs Research identified a “potential security vulnerability” in the sandboxing method used in the Mac OS X operating system.[2] The vulnerability jeopardizes the fundamental purpose of sandboxes in security: separation. CoreLabs’ proof of concept showed that applications downloaded through the Apple App Store could gain elevated privileges despite being restricted to a sandbox. The mechanism by which an attacker could gain escalated privileges on an OS X machine is fairly unsophisticated: well, asking for elevated privileges through an external program. This would also allow an application without network permissions to gain network access through the parent operating system. Apple had previously stated that all applications in the App Store would be required to implement sandboxing by March of next year, and it reiterated this requirement in response to the identified vulnerability. This is probably a step in the right direction, but it does not seem to target the fundamental problem. Apple must first identify a mechanism to effectively separate App Store applications from the underlying OS X operating system; its sandboxing requirement does not address this vulnerability, as noted by CoreLabs.[3] Wil Shipley, a Mac OS X developer, noted that sandboxing on the OS X desktop platform presents additional challenges compared with Apple’s mobile iOS operating system, remarking that “if there’s a hole anywhere in it [desktop sandbox] that malware authors find, then there’s really not much Apple can do until they issue a full operating system patch.”[4]

While the Apple sandbox issue is certainly newsworthy, and further highlights that Macs are as vulnerable as PCs to many classes of attacks used by malware authors, similar vulnerabilities stemming from incorrectly implemented sandboxes have been identified in Google’s Chrome web browser[5], Microsoft’s Internet Explorer[6] and Mozilla’s Firefox[7]. Doug Dineley explains one of the fundamental problems with relying on a sandbox for application or browser security in a recent InfoWorld article. Dineley notes that a core weakness of a sandbox arises when it needs to call an external program, such as Adobe Flash. To call that application, the sandbox must interact with the parent operating system, and that interaction is where attackers can exploit the sandbox and access information on the local computer outside of the sandbox’s virtual environment.[8]

This demonstrates that sandbox vulnerabilities are not unique to one application or one implementation method. Using a sandbox is sound security practice, as it minimizes the resources an application can access; however, it should be clear that a sandbox may still rely on some operating-system interaction to complete its overall task. While a sandbox is a sound security mechanism, it should not be relied upon as the sole means of segregating malicious software from the parent operating system or application. Sandboxes can help provide a more secure user experience, but it is still the end user’s responsibility to visit only trusted sites, and end users should minimize their use of external scripting languages such as JavaScript, a common mechanism for subverting browser sandboxes. In addition, software developers should protect end users more effectively by minimizing calls to external programs and reliance on external scripting languages, and by implementing their sandboxes carefully.

[1] Goldberg, Ian, David Wagner, Randi Thomas and Eric Brewer. “A Secure Environment for Untrusted Helper Applications (Confining the Wily Hacker).” Sixth USENIX UNIX Security Symposium, July 1996. Nov 13 2011.

[2] Foresman, Chris. “Mac OS X has its own sandbox security hole.” Ars Technica. Nov 2011. Web. Nov 13 2011.

[3] Ducklin, Paul. “Apple’s OS X sandbox has a gaping hole – or not.” Naked Security. Nov 2011. SophosLabs. Nov 14 2011.

[4] Foresman, Chris. “Mac OS X has its own sandbox security hole.” Ars Technica. Nov 2011. Web. Nov 13 2011.

[5] Schwartz, Matthew J. “Hackers Subvert Google Chrome Sandbox.” InformationWeek. May 2011. Web. Nov 13 2011.

[6] “Microsoft Security Program: Frequently Asked Questions: Microsoft Security Bulletin (MS99-031).” Microsoft Security TechCenter. Web. Nov 14 2011.

[7] “Critical Vulnerability in Firefox 3.5 and Firefox 3.6.” Mozilla Security Blog. Oct 2010. Web. Nov 13 2011.

[8] Kaneshige, Tom. “Does sandbox security really protect your desktop?” InfoWorld. Jun 2008. Web. Nov 14 2011.

Identity Theft – Credit Cards

29 11 2011

Identity theft is the activity in which someone obtains your personal information, such as your name, address, telephone number or credit card details, and uses it to impersonate you, usually for some economic gain.

Application fraud: This happens when criminals use personal information and documents to open accounts in someone else’s name. They can obtain documents by going through dumpsters: we throw out many documents, such as utility bills and bank statements, without shredding them, even though they carry our name and address. Using documents like these, criminals can impersonate us.

Skimming: This is the process in which criminals read the information stored on the magnetic strip of a credit card and use it for their own economic gain. Criminals usually install devices called skimmers at ATMs or self-serve gas pumps; these read the information on the card’s magnetic strip and are installed inconspicuously on the machine. Along with skimmers, criminals may install hidden cameras at ATMs to capture the 4-digit PIN the user enters. Skimming can also occur at restaurants: a waiter who takes your card when you pay the bill could pass it through a skimmer and hand it back without you ever suspecting fraud.

Another way to get your credit card information is through phishing attacks. Criminals send emails pretending to be from legitimate financial institutions, asking for your personal information and card details for “verification.” Unsuspecting users give this information away, thinking the mail came from a legitimate source. Call centers are another place where credit card information can be stolen: when you call your credit card helpline, you are asked for your card number and other personal details, and a dishonest call-center employee can collect this information and sell it to criminals.

Another very simple way for criminals to get access to your personal information is in public places. Suppose you are sitting at a coffee shop, shopping online; someone can look over your shoulder and capture your personal information as you enter it.

A few precautions against identity theft:

  • Do not give out personal information over the phone or in response to emails asking for it.
  • Always be aware of your surroundings: make sure no one is watching when you enter your PIN at an ATM or perform online transactions, and check that the ATM does not have any strange devices attached to it.
  • Always shred receipts, bills and other documents with personal information on them before disposing of them.
  • Do not leave documents with personal information on your desk at the office.



Is Duqu Looking to Build Off of Stuxnet’s Success?

21 11 2011

In October of 2011, a laboratory notified the Symantec Corporation of a piece of malware with some similarities to the Stuxnet worm that gained worldwide attention in 2010. Given the massive attention, and the allegations that Stuxnet was a state-funded operation, Symantec and other security experts launched a full investigation into this new piece of malware. The malware was eventually given the name Duqu because it creates files with the prefix “~DQ” on an infected machine.

Before we delve into the details uncovered about the Duqu Trojan, let’s take some time to refresh our memories on what Stuxnet was and how it operated. Stuxnet is a computer worm designed to infect Siemens industrial software in order to disrupt the centrifuges used to enrich uranium. It would rapidly increase and decrease the speed of the nuclear centrifuges to cause mechanical failures in the industrial equipment. While Stuxnet was doing its work on the centrifuges, it would also send false information back to the monitoring systems so the human operators had no idea their equipment was about to fail. It should be noted that, while there is no way to be 100% sure, it is widely accepted that Stuxnet was targeting nuclear centrifuges in Iran. The beauty of the Stuxnet worm was that, although it infected Microsoft Windows computers, it was not designed to negatively impact the client: unless it determined that the host was connected to the Siemens equipment used in nuclear plants, Stuxnet remained relatively dormant. Furthermore (and even more staggering), Stuxnet’s target was a closed system. The worm could not simply reach its target over a network connection, so it was sent out into the wild to infect as many Windows machines as possible in the hope that it would land on a specific laptop that would occasionally be connected to a Siemens PLC, from which it could start working on the centrifuges.

Stuxnet used removable media such as flash drives to infect clients and, once on a system, used peer-to-peer connections to propagate itself across a network. It was able to exploit four different zero-day flaws in Windows to inject a driver into the operating system kernel. This technique is typical of rootkits; it allows malicious software to operate outside the realm of typical malware, enabling the rootkit to hide itself or resist removal by anti-virus software. A rather brilliant aspect of Stuxnet was its use of digital signatures: the Stuxnet software, in particular the kernel driver, was signed with a software-signing certificate, which gave it a degree of inherited credibility due to the trust chain of signed code.

You may be thinking, “OK, well, why was Stuxnet so popular? Its targets were such a small subset of the global computer world.” That is exactly why it was such a hot topic in IT security circles. Stuxnet was the Wayne Gretzky of malware: it changed the way the game was played. It was the first piece of malware to specifically target an industrial asset, and it single-handedly changed the entire threat landscape for security professionals. Now your security stance needs to address industrial control systems as well as your computer systems. Another daunting thought: the Stuxnet infection was so large that it could have done devastating damage to ordinary clients had they been the desired target.

Enter Duqu.

Duqu is a combination of malicious files that work together to exploit a specific target. Like Stuxnet, Duqu exploits a zero-day flaw in Microsoft Windows to inject a digitally signed kernel driver into the operating system. The malicious driver then launches a series of DLLs, which in turn load a Remote Access Trojan (RAT) onto the infected client. A remote access trojan is malicious software that lets the operator of Duqu gather information about the client remotely. In addition to the trojan, Duqu also installs a keylogger on the infected machine, which logs the keystrokes entered on the client and ships those logs off to the threat actor. Unlike Stuxnet, the actual infection method of Duqu is unknown, because the initial installer, or dropper, is removed from the client once it is infected. Also unlike Stuxnet, Duqu does not appear to target industrial systems like PLCs; instead, its end goal is to give the attacker remote access to a client machine to gather information. From that brief summary you can see that, at face value, there appear to be links between Duqu and Stuxnet. Both pieces of malware use a zero-day exploit to inject kernel drivers into the operating system as a rootkit, to hide files and possibly for persistence, and both use digitally signed code.

Here is a great table from Dell comparing some of the major aspects of Stuxnet and Duqu:

Credit: Dell Secure Works “Duqu Trojan Questions and Answers”

However, as I mentioned earlier, when Duqu was first reported many security professionals were quick to label it the “Son of Stuxnet.” There was additional speculation that Duqu was written and launched by the creators of Stuxnet, or that it was the next evolution of the Stuxnet infection. Recently, however, these speculations have shifted: the similarities, while present, do not necessarily provide enough evidence to say without a doubt that the two come from the same actors. Dell SecureWorks has stated that “One could speculate the injection components share a common source, but supporting evidence is circumstantial at best and insufficient to confirm a direct relationship. The facts observed through software analysis are inconclusive at publication time in terms of proving a direct relationship between Duqu and Stuxnet at any other level.” I should also mention that the Symantec Corporation has done extensive research into the Duqu Trojan and stands by its initial assessment that Duqu is strongly related to Stuxnet and is likely the work of the same attackers.

In my opinion, I am more inclined to side with Dell’s conclusion. Stuxnet was so widely researched, and so much knowledge about its fundamental operation is available on the internet, that it is within reason to think someone could have used portions of Stuxnet to create Duqu without being involved in the Stuxnet operation. After all, Stuxnet has been completely reverse engineered and the code is available for download. Related or not, Duqu appears to be a very specific attack, and if you are in its crosshairs, you should be paying attention.


Dell Secure Works Duqu Trojan Questions and Answers

What is Duqu Up to

Cyber Warfare: A different way to attack Iran’s reactors

Same authors created malware that infected nuclear facilities?

Spotted in Iran, Duqu may not be “son of Stuxnet” after all


Stuxnet Dossier

Smartphone Security Revisited

20 11 2011

by Shanief Webb

In my previous blog post[1] I introduced the threat of mobile applications stealing personal user data from the smartphones they are installed on. However, at that time I focused only on the Android mobile operating system. Now I’d like to focus on a similar situation with mobile applications on Apple’s iOS.

TG Daily[2] reported today that Charlie Miller submitted a malicious app to Apple’s App Store. Unlike the Android Market, Apple actually has someone review each app before it is allowed into the App Store. Although Apple has already removed the app, we have to wonder: how can a malicious app slip through the cracks?

For Charlie Miller’s malicious app, it probably wasn’t human error that allowed it into the app market, but rather “good” design. Apparently, Miller made an app that appeared to be doing something useful, so it would not have raised any eyebrows. Specifically, it appeared to be an app that monitored the stock market, but it secretly made the devices it was installed on remote-controllable. Miller’s application was similar to the Android app I worked on in spring 2011 (noted in my previous blog post), and I should note that TG Daily claims Miller published his app for experimental reasons, not to be truly malicious.

Several malicious apps similar to this have been released in Google’s Android Market, and for that reason I am surprised Apple was not aware and cautious enough to train its reviewers to look for possible malicious behavior in the background activity of applications. Furthermore, Apple takes about a month on average before giving an approval/disapproval decision on an app’s entry into the App Store, so I doubt Apple was simply hurrying the app through the approval process.

In Apple’s App Store Review Guidelines[3], a “living” document, Apple describes some scenarios that can cause an app to fail the review process. A few of them apply to Charlie Miller’s app:

  • “Apps that do not perform as advertised by the developer will be rejected”
  • “Apps that include undocumented or hidden features inconsistent with the description of the app will be rejected”
  • “Apps that read or write data outside its designated container area will be rejected”
  • “Apps must comply with all legal requirements in any location where they are made available to users. It is the developer’s obligation to understand and conform to all local laws”

(Apple App Store Review Guidelines)

The last bullet encouraged me to look into Apple’s privacy policy[4] to see what Apple guarantees its customers in terms of personal information on their mobile devices. I found that personal information shared on some Apple products is “visible to other users and can be read, collected, or used by them” (Apple Privacy Policy), and that Apple expects users to be cautious about the information they share. In other words, Apple doesn’t provide much protection for its customers’ information and holds customers liable for any theft or misuse of their personal information. Similarly, Google’s privacy policy for mobile devices[5] has a conceptually identical clause: “If you decide to use third party applications on your device, any information those applications collect may be sent to third parties and the Google privacy policies do not apply.” (Google Mobile Privacy Policy)

Personally, I do not like the fact that Apple and Google hold their customers liable for the personal information on their mobile devices, but at the same time I respect their stance: they open their app markets to third-party developers and cannot be fully aware of all the damage malicious developers could potentially cause.

Potential to Create Forged SSL Certificates with the MD5 hash function

19 11 2011

The MD5 hash function has long been considered a weak algorithm; researchers exposed weaknesses in MD5 as far back as 1996 [1]. Since then, there have been numerous articles about collision attacks on MD5, and it is clear that MD5 is considered broken.

The vulnerability of MD5 has made it possible to carry out an MD5 collision attack to create fake SSL certificates. Previously, this attack was dismissed as merely theoretical, but researchers demonstrated the first known application of such an attack in December 2008, running the attack over four weekends on a network of 200 PS3 game consoles at a cost of only $657 [2]. The researchers estimated that the same amount of processing power could be purchased from Amazon for about $1,500. The attack works by allowing an attacker to appoint itself an Intermediate Certificate Authority (CA) and then generate trusted certificates that the real CA does not know about [2].
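For a sense of scale, it helps to look at the digest sizes involved. A small sketch using Python’s `hashlib` (the message here is arbitrary): MD5’s 128-bit output means a generic birthday-style collision search needs roughly 2**64 hash computations, and the structural flaws exploited in the 2008 attack drive the cost far below even that bound.

```python
import hashlib

msg = b"arbitrary certificate contents"

md5 = hashlib.md5(msg)
sha1 = hashlib.sha1(msg)

print("MD5 digest bits:  ", md5.digest_size * 8)   # 128
print("SHA-1 digest bits:", sha1.digest_size * 8)  # 160

# Generic birthday bound: ~2**(n/2) hashes to find *some* collision.
print("MD5 birthday bound:", 2 ** (md5.digest_size * 8 // 2))  # 2**64
```

The 2008 researchers did far better than the birthday bound because MD5’s compression function lets attackers steer collisions toward chosen prefixes, which is what made a colliding CA certificate practical.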

While CAs today have stopped issuing new certificates based on the MD5 hash function, it is possible that an attacker made use of this knowledge immediately after the MD5 vulnerability was publicized in December 2008 to obtain rogue CA certificates. It is also probable that others knew about the vulnerability before it was publicly announced and used it in the same way. What an attacker could have done is construct a non-CA certificate that collides with a rogue CA certificate, have the CA sign the non-CA certificate, and then apply that signature to the rogue CA certificate [3]. Such rogue CA certificates would then have the ability to sign additional certificates for any domain [4]. Combined with DNS spoofing, a rogue CA certificate could be used to impersonate a legitimate website while the browser shows a secure connection. The possibilities are many, from impersonating banking, e-commerce and email websites to harvesting passwords and credit card numbers.

Fast-forward to the present: we have already experienced a successful attack on SSL. In June 2011, hackers broke into DigiNotar’s systems to create forged certificates for the Google domain name, and those fake SSL credentials were used to spy on 300,000 Iranian internet users [5]. Even worse, DigiNotar only revoked the certificate at the end of July, and only went public a month later [5]. What was once deemed highly unlikely, an attack on a root CA, was successfully accomplished. Learning from this incident, it is highly possible that unlawful individuals or governments already possess such rogue CA certificates, ready to carry out an impersonation attack at an opportune time. Also, computing power has improved tremendously since December 2008, so carrying out a collision attack on MD5 now requires less computing time. Moreover, the DigiNotar incident showed that governments have an interest in creating forged certificates, and governments have access to far more resources and computing power to carry out such an attack.

A check of the list of root certificates accepted by Microsoft showed that 39 root certificates based on the MD5 signature hash are still accepted [6]. The reason given was “to allow for certificate chain building for previously signed code and certain SSL-protected websites [6].” Mozilla, on the other hand, has stopped accepting intermediate and end-entity certificates that use MD5 as a hash algorithm [7]. To test this, I tried to access the website that was created to demonstrate the rogue CA certificate produced by the MD5 collision attack. I managed to access it successfully using Microsoft Internet Explorer, complete with the SSL padlock icon (after changing my system time, as the rogue certificate was intentionally crippled to prevent it from falling into the wrong hands). When I tried Mozilla Firefox, however, I could not access the page and got an error message stating that the certificate had an invalid signature. As such, as of today, Microsoft Internet Explorer users are still vulnerable to this attack.

To close this security loophole, Microsoft should follow Mozilla’s lead and reject certificates that use MD5 as the hash function. While this may mean compatibility problems for some SSL-enabled websites, it would force those websites to move to certificates with at least a SHA-1 hash function. Microsoft should not wait until an attack happens before doing so; the DigiNotar incident has already shown that a successful attack on the SSL mechanism is not impossible.


[1] Kerner, S.M. (2004). MD5 Flaw Threatens File Integrity.

[2] Corelis, T. (2009). MD5 Is Officially Insecure: Hackers Break SSL Certificates, Impersonate CA.

[3] Adams, M. (2009). SSL MD5 PKI vulnerabilities threaten Web security.

[4] Edge, J. (2009). SSL Certificates and MD5 Collisions.

[5] Leyden, J. (2011). DigiNotar goes titsup: Disgraced certificate firm is sunk.

[6] Albertson, T. (2011). Windows Root Certificate Program – Members List (All CAs).

[7] Mozilla. (2011). Dates for Phasing out MD5-based signatures and 1024-bit moduli.

Smart Grid

19 11 2011

The American power grid is ancient; much of it was built before the microchip. Meanwhile, the need for electricity has skyrocketed: with most Americans powering computers, smartphones and other electronic devices, we are relying on an old technology to carry us forward [3]. To fix this situation, the smart grid has become a hot topic. The smart grid is an attempt to deliver electricity from suppliers to consumers more intelligently: the grid analyzes and predicts where, and to whom, the electricity needs to go. By placing small remotely controlled computers on wires, substations, transformers, switches and meters, all of these devices can talk to each other to provide energy where and when it is needed.

The smart grid helps both the power companies and the consumers. With a smarter grid, power companies can see where outages occur and where their systems have weaknesses; with the current way of doing things, power companies often do not even know there is a problem until a customer reports it. On the consumer side, a smart grid lets people see when and where they use power, making it easier to identify which devices consume the most and how to cut back on electricity usage and the electricity bill [2].

When power companies place all these devices on a network and have them talk to each other, some major security concerns arise. The first is the radio-frequency communication the devices use. An attacker could access this wireless channel and monitor the traffic; worse, they could insert their own data and change how power is distributed throughout the network. An attacker could also stop a node from receiving or sending data; could this type of attack shut down a network [1]? All of these very important questions need to be asked, and the power companies implementing these systems need to have a plan.
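One standard countermeasure against an attacker inserting their own data on the radio link is to authenticate every reading with a message authentication code. The sketch below uses Python’s standard `hmac` module; the key and message format are hypothetical, invented purely for illustration, and a real deployment would provision a separate secret into each meter.

```python
import hashlib
import hmac
import json

# Hypothetical pre-shared key provisioned into one meter by the utility.
METER_KEY = b"demo-key-not-a-real-secret"

def sign_reading(reading: dict) -> dict:
    """Attach an HMAC-SHA256 tag so the utility can detect tampering."""
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(METER_KEY, payload, hashlib.sha256).hexdigest()
    return {"reading": reading, "mac": tag}

def verify_reading(message: dict) -> bool:
    """Recompute the tag; constant-time compare resists timing attacks."""
    payload = json.dumps(message["reading"], sort_keys=True).encode()
    expected = hmac.new(METER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["mac"])

msg = sign_reading({"meter": 42, "kwh": 3.7})
print(verify_reading(msg))       # True
msg["reading"]["kwh"] = 0.1      # attacker alters the data in transit
print(verify_reading(msg))       # False
```

Note that a MAC only stops tampering and forgery; hiding the usage pattern itself from an eavesdropper additionally requires encrypting the traffic.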

Another point of weakness is the devices, or meters, in individual homes. If an attacker had access to a home’s power usage, they could tell when people were home, when they were at work, or whether they were on vacation. Having this personal information sent to the power company to better supply you with a service is great, but what about the security risks of sending that information over an unsecured network?

There are countless ways an attacker might try to gain access to sensitive information. The real question is: what are the power companies doing to protect themselves and the consumer? When the smart grid was first designed and deployed, little consideration was given to security. Now that it has been in the field and attackers have gained access to sensitive information, these companies are starting to put security measures in place.

The important takeaway here is that organizations, be they corporate enterprises or government bodies, need to think about the security of what they are trying to do. When the smart grid was being developed, the main points were probably “look at all the neat things we could do with this,” and not many people were saying “we need to look at the security implications of what we are attempting.” With groups like Anonymous or LulzSec declaring war on different organizations, companies must keep the security of their infrastructure and their consumers at the front of their minds.


1. Lafferty, Shawn, and Tauseef Ghazi. “The Increasing Importance of Security for the Smart Grid.” POWERGRID International / Electric Light & Power. Web. 08 Nov. 2011.

2. “Smart Grid.” Department of Energy. Web. 08 Nov. 2011.

3. United States. Department of Energy. The Smart Grid: An Introduction.