Security in the Era of Big Data

10 10 2012

Big data has become a buzzword lately. Companies see it as a lucrative technology that turns massive amounts of data from both online and offline sources into useful information for predicting behaviors and trends. When companies talk about big data, volume, velocity, variety, and veracity usually come to mind [1]. Security, however, is still an afterthought for many enterprises. In this article, I will discuss the security considerations of big data, especially the concerns related to NoSQL.

Before discussing the security issues, let’s take a quick look at big data and NoSQL. As I mentioned earlier, big data is not a matter of size. It is about 4 Vs: volume, velocity, variety, and veracity [1].

  • Volume: companies may deal with terabytes or even petabytes of information [1]
  • Velocity: how long does it take to turn big data into information? Can it be done in real time? [1]
  • Variety: both structured and unstructured data, such as data from sensors, videos, audio, social media sites, and cellphone signals [1]
  • Veracity: what if you don’t trust your data source? How do you deal with untrusted data? [1]

Due to the complexity of big data, the traditional database management system, which stores structured data in a relational database, may no longer be suitable for large volumes of unstructured data. For example, companies need to think about how to deal with data such as graphs or audio that does not fit into the rows and columns of a relational database. As a result, instead of using SQL, many large companies such as Google and Amazon have adopted NoSQL for storing both structured and unstructured data [2].

What is NoSQL?

NoSQL stands for “not only SQL” or “not relational” [3]. There are six key characteristics of NoSQL [3], listed below; for the purpose of our discussion, I will focus mainly on three of them: horizontal scalability, data replication and distribution, and the flexible schema.

1)    Horizontal scalability [3]

2)    Data replication and distribution [3]

3)    Simple call level interface [3]

4)    Weaker concurrency model than ACID properties [3]

5)    Efficient use of distributed indexes and RAM [3]

6)    Flexible schema [3]

Horizontal scalability is probably one of the best-known features of NoSQL. It basically means that NoSQL can distribute data and workload over multiple servers instead of improving a single server’s capability [4]. The economic benefit is that the cost of upgrading one powerful server is replaced by the cost of multiple relatively inexpensive pieces of hardware [4]. This also reduces the single point of failure (or bottleneck) and increases availability and velocity [4].
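To make the idea of spreading data and workload across servers concrete, here is a toy Python sketch (not taken from any particular NoSQL product, which would typically use consistent hashing or range partitioning) that assigns records to hypothetical nodes by hashing their keys:

```python
# Toy illustration only: spread records across several inexpensive servers
# by hashing the record key. Server names are made up for the example.
import hashlib

SERVERS = ["node-a", "node-b", "node-c"]

def pick_server(key: str) -> str:
    """Map a record key to one of the servers."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

for user_id in ["alice", "bob", "carol", "dave"]:
    print(user_id, "->", pick_server(user_id))
```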

Data replication and distribution go hand in hand with horizontal scalability. Data can be replicated and distributed over different servers, which allows faster and more efficient operations [3].

The flexible schema is the major innovation of NoSQL. A traditional relational database requires us to define the table structure up front, while NoSQL allows us to add new attributes to data records dynamically without first defining a fixed table schema [3]. It also accommodates different kinds of data, such as graphs and documents [3].
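As an illustration of a flexible schema, here is a minimal sketch assuming a local MongoDB instance and the pymongo driver; the database, collection, and field names are made up for the example. Two documents with completely different attributes can live in the same collection without any table definition:

```python
# Minimal flexible-schema sketch (assumes MongoDB running locally and
# pymongo installed). No table definition is needed; each document
# carries its own set of attributes.
from pymongo import MongoClient

products = MongoClient()["shop"]["products"]  # hypothetical names

products.insert_one({"name": "camera", "price": 199})        # structured record
products.insert_one({"name": "lecture video",                # same collection,
                     "codec": "h264",                        # brand-new attributes
                     "tags": ["education", "security"]})
```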

There is a huge debate over whether SQL or NoSQL is better. Some people argue that SQL works just fine because most companies do not need to handle data on the scale of Google’s [2]. For the purpose of this paper, I will not focus on that debate. Rather, I would like to address some security concerns related to NoSQL.

Flexible schema

Since NoSQL allows attributes to be added to data records dynamically, a forward-looking security mindset should be established [5]. That means before we add an attribute, we need to understand what will happen with it, what its security implications are, and what privileges should be granted to it [5].

Data distribution/dispersion

Unlike a relational database, where normalization keeps each piece of data stored in a single location, big data systems allow the same data to be stored across different servers [6]. Therefore, it is harder to locate the data and maintain its confidentiality [6].

Still a new model

Since NoSQL is a relatively new technology, security is not a built-in feature yet. Of course, that does not mean the traditional relational database is free of vulnerabilities. However, since most of those vulnerabilities are well known to the public, many countermeasures and policies have been effectively implemented to prevent them. For example, SQL databases already have strict access controls and privacy tools; NoSQL, on the other hand, does not put much emphasis on them yet [6]. Like SQL, NoSQL is subject to input-validation vulnerabilities (such as the potential threat of NoSQL injection), weak authentication, and unencrypted data [6].
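To make the NoSQL-injection concern concrete, here is a hedged sketch, again assuming pymongo and a hypothetical users collection (real applications would also hash passwords). If a web application passes attacker-controlled, JSON-parsed values straight into a query, an operator such as $ne can match records the attacker should not see; simple type checking closes this particular hole:

```python
# Sketch of a NoSQL-injection risk and a basic mitigation.
# Assumes a local MongoDB instance, pymongo, and a made-up "users" collection.
from pymongo import MongoClient

users = MongoClient()["app"]["users"]

def login_unsafe(username, password):
    # If password arrives as parsed JSON, it could be {"$ne": ""},
    # which matches any user whose password is not the empty string.
    return users.find_one({"user": username, "password": password})

def login_safer(username, password):
    # Reject anything that is not a plain string before it reaches the query.
    if not isinstance(username, str) or not isinstance(password, str):
        raise ValueError("invalid input")
    return users.find_one({"user": username, "password": password})
```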

The People

Most people who work on NoSQL are new to the technology as well. They need to spend most of their time understanding the technology and making it work [6], so security is often not among their considerations.

Big data, Big target

Companies that handle big data with NoSQL are more likely to be attractive targets for malicious parties, because 1) attackers know that those companies hold a large amount of high-quality data, and 2) NoSQL is not yet mature and security has not been well considered. Attackers are keen to find new vulnerabilities and catch companies off guard.

Privacy

The earlier news about how the big-box retailer Target figured out a teenage girl’s pregnancy before her father did shows how a company can breach privacy by mining its customers’ data. Many users maintain multiple online identities to avoid being associated with their real identity. However, with the ability to associate data from different activities across different systems, companies can consolidate our information easily [6]. That greatly reduces users’ ability to avoid being tracked by companies [6].

In conclusion, NoSQL is still a new technology, and there are many security issues associated with it. Although it has been adopted by technology pioneers like Google, Amazon, and Yahoo!, we are still unsure whether it is a fad or a future data-management trend. Most companies should be cautious when considering using NoSQL for big data. Moreover, security should not be an afterthought for NoSQL, because its ability to add new attributes dynamically makes it inherently riskier than traditional SQL. Thus, a forward-looking security mindset is necessary if we are moving to NoSQL.

___________

  1. What Is Big Data? IBM. Web. <http://www-01.ibm.com/software/data/bigdata/>.
  2. Cogswell, Jeff. SQL vs. NoSQL: Which Is Better? 2012. Web. <http://slashdot.org/topic/bi/sql-vs-nosql-which-is-better/>.
  3. Cattell, Rick. Scalable SQL and NoSQL Data Stores. 2011. Web. <http://cattell.net/datastores/Datastores.pdf>.
  4. Keller, Eric. Horizontal Scalability. BitLancer, 2012. Web. <http://www.bitlancer.com/blog/2012/08/horizontal-scalability/>.
  5. Chickowski, Ericka. Does NoSQL Mean No Security? Dark Reading, Jan 11, 2012. Web. <http://www.darkreading.com/database-security/167901020/security/news/232400214/does-nosql-mean-no-security.html?itc=edit_stub>.




Bank compromises

10 10 2012

The world as we know it is getting more complex and more dangerous. There are security issues, flaws, and problems that bad actors try to take advantage of. In the cyber realm there are constant ploys to infiltrate and exploit systems, either for personal gain or for kicks. As you may have noticed, there have been several DDoS attacks on banks in recent weeks, making it hard for customers to access their accounts. With this type of problem occurring right now, it’s only a matter of time before some other type of attack comes to fruition.

An article posted on scmagazine.com by Dan Kaplan stated that, “Security researchers at RSA have been warned that a sophisticated attack is being hatched in order to raid customer’s bank accounts at some 30 banks in the U.S.” (Kaplan, 2012). The news circulating is that a Russian cyber gang is planning an attack that will use consumers’ computers to process unauthorized wire transfers. For this attack to happen, the Gozi Prinimalka gang (named after the Trojan that will cause the mayhem) will call on supporters (botmasters) to manage and launch attacks from Trojan-infected computers. The article goes on to say that the botmasters, who will be trained to perform the MITM attacks, won’t be given access to the Trojan’s code. This ensures that the botmasters remain dependent on the Gozi Prinimalka gang. The plan seems well thought out and could pose a serious threat to banking customers.

These attacks will occur when “the attacker uses a virtual machine-synching module to mimic the victims IP address while accessing the targeted bank account. In addition the ring will utilize phone-flooding software to prevent the victims from receiving bank notifications of usual money transfers.” (Kaplan, 2012) Most banks make attractive targets because their authentication is not strong enough to grant proper access without risk of compromise. If this takes place, accounts will be hacked and it will cause real problems. I received a phone call just last week stating that my bank account had been compromised and that the bank would issue me a brand-new card at no charge. This made me think about the millions of people who can and will be affected if this happens.





Anti-detection and anti-analysis techniques of modern malware – are they stumping security researchers?

4 10 2012

Modern malware has been evolving to such an extent that it is difficult for security researchers to keep up. Most anti-virus solutions depend largely on signature-based scanning and some form of heuristic-based detection. However, there is no doubt that creating signatures for every single piece of malware is a catch-up game in which the good guys are always behind. The total number of unique malware variants jumped from 286 million in 2010 to 403 million in 2011 [1], a staggering increase of roughly 40%. For every new virus or worm that is discovered, many machines will already have been infected while the anti-virus vendors are still frantically trying to analyze the malware and release new definitions to their customers.

Malware writers are aware of this fact and have been incorporating advanced techniques to further delay the detection and analysis of their malware. One of the most commonly used methods is polymorphism, whereby the malware constantly mutates its own code to bypass signature-based anti-virus scans while still retaining the malicious payload. This is typically achieved through encryption with varying keys, compression, and filename changes [2]. Although polymorphic malware has been around since the early 1990s, the good news is that most modern anti-virus software can detect it with a decent probability [3], usually by looking at the portion of the malware that does not change. However, cybercriminals have already found a way to bypass this, through a variation called “server-side polymorphism”. In this case, the malware is hosted on the attackers’ server, allowing them to generate a unique version for every potential victim each time the malware is downloaded [4]. This essentially renders pattern matching useless, since no two versions of the malware are the same.
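A small Python illustration of why byte-for-byte signatures struggle against per-victim variants: XOR-encoding the same harmless payload with two random keys produces two files with completely different hashes, even though the decoded behavior would be identical. This is only a toy model of the idea, not real malware:

```python
# Toy demonstration: the same harmless payload, encoded with two different
# one-byte XOR keys, yields two completely different SHA-256 hashes, so a
# hash- or byte-pattern signature for one variant misses the other.
import hashlib
import os

payload = b"harmless test payload standing in for malicious code"

def xor_encode(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

for _ in range(2):
    key = os.urandom(1)[0]
    variant = bytes([key]) + xor_encode(payload, key)  # key prepended so a stub could decode it
    print(hashlib.sha256(variant).hexdigest())
```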

More alarmingly, a new form of attack based on Domain Generation Algorithms (DGAs) has manifested itself in new-generation malware. Popularized by the Conficker worm in 2008, DGA malware contains code that allows it to receive commands from remote locations [5]. Each day, the malware generates a new list of domain names and tries to contact all of these locations for an update. Since the malware author knows the algorithm, all he needs to do is register one of the domains and host the update on that site. This makes the job of cyber law-enforcement officers extremely difficult, since they have to shut down all possible domains to prevent the update while the attacker only needs to use a single one [6]. Furthermore, signature-based detection becomes irrelevant, since the update from the remote site can modify the malware’s code and behavior [7].
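The following is a toy sketch, in the spirit of (but not identical to) Conficker-style DGAs, showing how a date-seeded algorithm lets the malware and its operator independently derive the same daily domain list; the seed and the .example.com suffix are placeholders:

```python
# Toy domain-generation algorithm: both sides derive the same daily list
# from the date, so the operator only needs to register one of the domains.
import hashlib
from datetime import date

def daily_domains(seed: str, day: date, count: int = 5):
    domains = []
    for i in range(count):
        material = f"{seed}-{day.isoformat()}-{i}".encode()
        name = hashlib.sha256(material).hexdigest()[:12]
        domains.append(name + ".example.com")  # placeholder TLD
    return domains

print(daily_domains("toy-seed", date.today()))
```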

In response, malware researchers have turned to advanced techniques to analyze modern viruses, including sandbox testing, emulation, and virtualization technologies. They confine the malware to a restricted environment to limit its actions, allowing more effective analysis and reverse engineering of the malware.

Unfortunately, it seems that even so, cybercriminals have found ways to circumvent researchers’ analysis techniques. Recent years have seen a rise in anti-virtual-machine malware (or VM-aware malware), which can distinguish whether it is running in a virtual machine or a real environment. If such malware recognizes that it is being run in a VM, it will feign benign behavior and not release its payload [8].
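For a sense of how simple such checks can be, here is a rough Python sketch of the kind of heuristic a VM-aware sample might run, shown from the analyst's perspective of hardening a sandbox. The MAC prefixes and the Linux DMI path are common, well-documented indicators, but the list is by no means exhaustive:

```python
# Rough sketch of common VM indicators (Linux guest assumed for the DMI path).
import uuid

VM_MAC_PREFIXES = ("08:00:27", "00:0c:29", "00:50:56", "00:05:69")  # VirtualBox, VMware

def looks_like_vm() -> bool:
    raw = "{:012x}".format(uuid.getnode())
    mac = ":".join(raw[i:i + 2] for i in range(0, 12, 2))
    if mac.startswith(VM_MAC_PREFIXES):
        return True
    try:
        with open("/sys/class/dmi/id/product_name") as f:
            return any(v in f.read() for v in ("VirtualBox", "VMware", "KVM", "QEMU"))
    except OSError:
        return False

print("running in a VM?", looks_like_vm())
```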

Like a cat-and-mouse game, the white-hat community has yet again formulated possible solutions against such malware. At the 2012 BlackHat Conference, three researchers presented their findings on anti-VM, anti-debugging, and anti-disassembly techniques used by malware [9]. They analyzed more than four million malware samples and, in doing so, created a malware sample database with an open architecture. This allows other researchers around the world to see the results of the analysis, as well as develop and plug in new analysis capabilities.

Some experts have also claimed that the more evasive a piece of malware tries to become, the greater the chance that it will attract unwanted attention because of its over-innovative methods [10]. This is because even with anti-VM and anti-debugger features, researchers thus far have still been able to bypass the evasion techniques and deconstruct such malware.

However, it is only a matter of time before malware authors invent new ways to work around the efforts of security researchers. It is true that the white hats might have the right solution for a while, but there can be no silver bullet for malware, or any issue pertaining to the IT security industry. The only solution is to be ever vigilant and be as agile and adaptive as the black hats are.

___________

[1] Symantec Internet Security Threat Report 2011. Rep. no. 17. Symantec, Apr. 2012. Web.

[2] Rouse, Margaret. “Polymorphic Malware.” SearchSecurity.com, Apr. 2007. Web. 04 Oct. 2012. <http://searchsecurity.techtarget.com/definition/polymorphic-malware>.

[3] Cluley, Graham. “Server-side Polymorphism: How Mutating Web Malware Tries to Defeat Anti-virus Software.” Naked Security, 31 July 2012. Web. 04 Oct. 2012. <http://nakedsecurity.sophos.com/2012/07/31/server-side-polymorphism-malware/>.

[4] See [1].

[5] Markoff, John. “Worm Infects Millions of Computers Worldwide.” The New York Times, 23 Jan. 2009. Web. 04 Oct. 2012. <http://www.nytimes.com/2009/01/23/technology/internet/23worm.html?_r=0>.

[6] Constantin, Lucian. “Malware Authors Expand Use of Domain Generation Algorithms to Evade Detection.” IDG News Service, 27 Feb. 2012. Web. 04 Oct. 2012. <http://www.pcworld.com/article/250824/malware_authors_expand_use_of_domain_generation_algorithms_to_evade_detection.html>.

[7] Ollmann, Gunter. “Domain Generation Algorithms (DGA) in Stealthy Malware.” Damballa, n.d. Web. 04 Oct. 2012. <https://blog.damballa.com/archives/1504>.

[8] Sun, Ming-Kung, Mao-Jie Lin, Michael Chang, Chi-Sung Laih, and Hui-Tang Lin. “Malware Virtualization-Resistant Behavior Detection.” 2011 IEEE 17th International Conference on Parallel and Distributed Systems (2011): 912-17. IEEE. Web. 4 Oct. 2012.

[9] Branco, Rodrigo Rubira, Gabriel Negreira Barbosa, and Pedro Drimel Neto. Scientific but Not Academical Overview of Malware Anti-Debugging, Anti-Disassembly and Anti-VM Technologies. Tech. Qualys – Vulnerability & Malware Research Labs, 2012. Web. 4 Oct. 2012. <http://research.dissect.pe/docs/blackhat2012-paper.pdf>.

[10] Mushtaq, Atif. “The Dead Giveaways of VM-Aware Malware.” FireEye, 27 Jan. 2011. Web. 04 Oct. 2012. <http://blog.fireeye.com/research/2011/01/the-dead-giveaways-of-vm-aware-malware.html>.





Bitcoin: Has the Currency Renaissance Begun?

3 10 2012

The first time I heard of a Bitcoin, I was sitting at my kitchen table when my boyfriend, who has his bachelor’s in Computer Science/Information Systems, said to me, “So, have you ever heard of a Bitcoin? I think you’d find it fascinating because your undergrad was International Relations and now you’re at Carnegie Mellon.” I brushed it off initially because the terms “hashes, blocking algorithms, SHA256, and Internet Relay Chat” were not yet part of my repertoire. After disregarding the topic, then arguing about malevolent uses of the currency, the problems law enforcement must face, and the Silk Road (which I do not recommend to anyone, for the record), I realized that he was absolutely right. Just as I am making my transition into IT, the Bitcoin is making its transition into policy, and yes, I find it fascinating.

Cryptocurrency, sometimes referred to as digital currency, is a very exciting and relevant concept. Rooted in cryptography, mathematical algorithms, and open-source software, creating digital money certainly finds its home in IT specialties, but its influence reaches into the social sciences and economics. The most popular cryptocurrency is the Bitcoin, which allows anyone around the world to configure his or her machine to buy, sell, and trade digital cash. Those who have Bitcoins use them like cash in the traditional sense when buying a good or service; there is no paper trail, no confirmed identity of the buyer or seller, and no using the same coin twice. The process is complex, which is why I would argue that it has not yet hit the critical mass needed to be successful.

Is it the next wave of how we view currency as a society? Maybe. If and when a crypto-currency does become easy and accessible, will it make an impact? Absolutely.

What are Bitcoins, and How Are They Created?

The pseudonymous Satoshi Nakamoto is acknowledged as the creator of the Bitcoin; he published a cryptography paper online outlining a new digital currency that solves the issues earlier proposals had faced. He ensured that the information remains secure, that no coin is spent more than once, and that only a finite amount is created.[i]

To solve the problem of security, Nakamoto used asymmetric-key cryptography, which gives users a public and a private key that can be used to sign transactions and maintain the integrity of the exchange. Additionally, the keys preserve the identity of the buyer/seller and keep the information sent between the two confidential. Transactions are hashed using a double SHA-256 algorithm, and the signatures rely on elliptic-curve cryptography as implemented in OpenSSL[ii].
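The double SHA-256 mentioned above is easy to reproduce with Python’s standard library; this minimal sketch simply hashes some example bytes twice, which is the operation Bitcoin applies to serialized transaction and block data:

```python
# Double SHA-256: hash the data, then hash the resulting digest again.
import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

print(double_sha256(b"example transaction bytes").hex())
```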

Users obtain coins in two ways: they can be bought and sold on exchanges, or traded directly from person to person. The buyer or seller configures a virtual wallet, which houses the virtual coins, and registers on an exchange, the most popular being mtgox.com. On an exchange, Bitcoins are bought and sold by converting traditional currency into Bitcoin. Trading from wallet to wallet is trickier and requires a more tech-savvy user. Additionally, once a coin is spent, or traded from one wallet to another or over the exchange, the transaction is broadcast across the network, ensuring it cannot be spent again[iii].

Although the transaction and coin itself are encrypted, it is uncertain whether the wallet itself comes encrypted, too. From my understanding, a person must configure his or her own wallet to security settings that he or she chooses. This may be due to the tradeoff of functionality and security.[iv]

To maintain a finite number of coins, Nakamoto created a cryptographic block-mining algorithm; each time a miner, who can be any user with enough processing power, finds a solution to the algorithm’s puzzle, that miner is awarded a batch of 50 coins. The algorithm tapers the reward off over time and eventually caps production at 21 million coins.[v]  These coins are pumped into the marketplace and are bought and sold, much like on a currency exchange, for other users to invest in. The system is peer-to-peer and fully decentralized; it cuts out government and the banking system, the traditional places where currency is created, lent, and traded.[vi]
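Here is a greatly simplified, hypothetical sketch of the mining idea: search for a nonce that makes the double SHA-256 of a toy “block header” start with a chosen number of zero hex digits. Real Bitcoin mining compares the hash against a full difficulty target and uses a specific header format, so this is only meant to convey the brute-force nature of the search:

```python
# Simplified proof-of-work: find a nonce whose double SHA-256 of
# (header + nonce) begins with a given number of zero hex digits.
import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, zero_digits: int = 4) -> int:
    nonce = 0
    while True:
        digest = double_sha256(header + nonce.to_bytes(8, "little")).hex()
        if digest.startswith("0" * zero_digits):
            return nonce
        nonce += 1

print("found nonce:", mine(b"toy block header"))  # raise zero_digits to raise difficulty
```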

Traditional Currency, How Does It Work?

Monetary policy is a study all its own and takes many years to fully understand, but this is how currency works at a very high, broad level. In a nutshell, traditional currency as we know it is called “fiat,” which is Latin for “let it be done.” This means that the value of currency is not backed by a precious metal, like gold, or some other object; rather, it is backed by the will of the people and their governments, who believe it to be valuable. Governments create more money when demand calls for it, take money out of the market if there is too much supply, and set an interest rate for lending money, all of which controls currency from a central location. Banks, on the other hand, act as the middlemen between governments and the people. They have the ability to give loans and set their own interest rates, which can be very high at times for the layman. [vii]

Bitcoin’s Implications on Traditional Currency

Many around the world are unhappy with the centralized system of money creation. Governments have a hand in inflation, which makes a country’s currency worth less; therefore, it costs more to buy goods and services. The Bitcoin cuts out banks as the middleman. Since coins can be transferred on an exchange or from wallet to wallet, there is no interest rate on the money. If the Bitcoin gains popularity, banks can no longer make money from money. As for its impact on governments, aside from the initial investment of cash in various currencies, fiat money does not need to be used as frequently, which insulates Bitcoin users from the inflation swings of their own currency. [viii]

Putting It All Together

For the study of information security, the Bitcoin is a current, relevant event happening in the real world instead of in a classroom. There are security vulnerabilities to patch or to exploit, depending on which camp you are in. The system is not 100% foolproof. Exchanges get hacked, digital wallets are stolen, coins could potentially be mined in excess, and users who do not know the intricacies can perpetuate these problems and leave themselves at risk of malicious attacks. Furthermore, given the Bitcoin’s anonymity and decentralization, many use it for underground, illegal purchases to evade law enforcement.

So what is the impact and value to be gained from this? Like the Internet, which was an experiment of its own, the Bitcoin is a test. It highlights the lengths people are willing to go to connect with one another, especially when large amounts of money are involved. It is an extension of what we already know about the Internet: that it is free of cultures and borders; it has its flaws, and people may use it for malicious purposes, but it was created with good intentions. The Bitcoin is the same; it is transcendent, inherently benevolent, and illuminates the creativity and ingenuity of the human mind.

Whether the currency renaissance has started is a question left up to each individual examining cryptocurrency, but I would argue that our traditional notions of money are about to change.


[i] Wallace, Benjamin. “The Rise and Fall of Bitcoin.” Wired.com. Conde Nast Digital, 23 Nov. 2011. Web. 02 Oct. 2012. <http://www.wired.com/magazine/2011/11/mf_bitcoin/>.

[ii] Yang, Edward Z. “The Cryptography of Bitcoin.” Inside 206-105, n.d. Web. 01 Oct. 2012. <http://blog.ezyang.com/2011/06/the-cryptography-of-bitcoin/>.

[iii] “Everything You Want to Know About Bitcoin, the Digital Currency Worth More Than the Dollar.” Discover Magazine, n.d. Web. 02 Oct. 2012. <http://blogs.discovermagazine.com/80beats/2011/06/10/everything-you-want-to-know-about-bitcoin-the-digital-currency-worth-more-than-the-dollar/>.

[iv] Yang, Edward Z. “The Cryptography of Bitcoin.” N.p., n.d. Web. 02 Oct. 2012. <http://blog.ezyang.com/2011/06/the-cryptography-of-bitcoin/>.

[v] Ball, James. “Bitcoins: What Are They, and How Do They Work?” The Guardian. Guardian News and Media, 22 June 2011. Web. 02 Oct. 2012. <http://www.guardian.co.uk/technology/2011/jun/22/bitcoins-how-do-they-work>.

[vi] “What Is a Good Way to Explain Bitcoin?” Questions and Answers. N.p., n.d. Web. 02 Oct. 2012. <http://www.weusecoins.com/questions.php>.

[vii] Bade, Robin, and Michael Parkin. Foundations of Macroeconomics. Boston: Pearson Addison Wesley, 2009. Print.

[viii] “Bitcoin, Gold, and Competitive Currencies.” James Turk Interview with Economist and Trader Félix Moreno De La Cova. N.p., n.d. Web. 02 Oct. 2012. <http://themonetaryfuture.blogspot.com/2012/09/bitcoin-gold-and-competitive-currencies.html>.





Which is more secure: Linux or Windows?

2 10 2012

This has been a hotly debated topic for many years in the computer/technology community. With many advocates on both sides, I decided to take a closer look at both operating systems to see which one is more secure. Now, when you talk about security there are many avenues one can take. Some focus on how well a problem can be found and mitigated, since problems are inevitable, while others look at the inherent security that lies within the structure of the OS. I will focus on a combination of the two and will break down my findings into four groups: privileges, responsiveness to threats, the monoculture effect, and the human factor.

When speaking about privileges, the question is what a user does and does not have the ability to do. Words like administrator and superuser enter the vernacular of both ordinary and advanced users. In the security realm, privileges are a big deal, and the amount of power a single user has over his or her system is vital to how secure the system can ultimately be. This is why Linux surpasses Windows when it comes to preventive security through limited access. According to Katherine Noyes of PC World, “In Windows, users are generally given administrator access by default, which means they pretty much have access to everything on the system, even its most crucial parts. So, then, do viruses. It’s like giving terrorists high-level government positions.”[1] This is a problem, because at any time a less-than-savvy user can go into the registry and completely destroy their computer. A friend of mine once went into her registry and deleted her HKEYs because she heard they were viruses. Having that kind of power over your machine right at the starting line is extremely dangerous to your security, and Linux handles the situation far better than Windows. Noyes continues: “With Linux, on the other hand, users do not usually have such “root” privileges; rather, they’re typically given lower-level accounts. What that means is that even if a Linux system is compromised, the virus won’t have the root access it would need to do damage system wide; more likely, just the user’s local files and programs would be affected.”[1] By restricting the user to an account with lower privileges, you reduce the attack surface available to a virus or an assailant. With time and knowledge a user will learn how the system works and will be able to access the superuser or root level when needed, so when it comes to privileges, Linux is the winner.
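The least-privilege habit described above can also be applied to individual programs. As a small, Unix-only sketch (not taken from the cited articles), a script can simply refuse to run with root privileges so that any bug in it is confined to one user’s files:

```python
# Least-privilege sketch: refuse to run as root (Unix-only; os.geteuid
# is not available on Windows).
import os
import sys

if os.geteuid() == 0:
    sys.exit("Refusing to run as root; use an unprivileged account.")

print("Running as uid", os.geteuid(), "- any damage is limited to this user's files.")
```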

Now, even though Linux restricts privileges, that doesn’t mean it is immune to viruses or other threats; in a system created by fallible humans, things break, mistakes are made, and accidents happen. How we deal with these problems is a sign of how good our security is, because as any security specialist knows, mitigation is just as important as prevention. Windows excels in response and mitigation compared to Linux. Brier Dudley of the Seattle Times reported on a study comparing Windows servers and Red Hat servers to find out which was more secure. In his article he wrote, “They compared Windows Server 2003 and Red Hat Enterprise Server 3 running databases, scripting engines and Web servers (Microsoft’s on one, the open source Apache on the other). Their criteria included the number of reported vulnerabilities and their severity, as well as the number of patches issued and days of risk — the period from when vulnerability is first reported to when a patch is issued. On average, the Windows setup had just over 30 days of risk versus 71 days for the Red Hat setup, their study found.”[2] The study showed that it took Microsoft roughly 41 fewer days to learn about problems and fix or patch them. This is a big deal for a company that can’t afford to have its servers exploitable for that long; you could potentially lose a lot of money if a problem like that cannot be fixed as soon as possible.

Another point in favor of Linux is the dominant monoculture of the Windows ecosystem. Nilotpal Chowdhury, in his article “Why Linux is More Secure than Windows,” wrote: “The Windows environment has been likened to a monoculture. There is great homogeneity which makes it easier for crackers to write exploit code, viruses and the like. Compare this to the Linux world. Here, a program can be a .deb, .rpm, or source code, to name a few. This heterogeneity makes it difficult for crackers to have the widespread impact that is possible on Windows.”[3] This is just one example, but much can be gleaned from it. Windows has streamlined its production into one system that everyone gets, which does not promote diversity between systems. So when attackers plan an attack on a Windows system, they know that every Windows system is largely the same and that they don’t have to do additional reconnaissance. They also know that if they create a virus for one computer, the likelihood that it will affect a massive number of computers is very high, because everyone is on the same system. If there is a hole in the OS and no patch has been issued yet, that hole will be in every single Windows computer, making it a very good environment for attackers. Linux, by contrast, has a diverse culture with so many versions of the OS that an attacker would find it extremely hard to hit a large set of Linux computers with one attack. This diversity makes Linux a little more secure than Windows.

The last and most important factor to consider is the human factor. Ultimately, the security of an OS is only as good as the people using it. The people at the keyboards control how secure or insecure a system can be. Naturally, if everything were done and handled the way it is supposed to be, this conversation would not have to happen. But more often than not these systems suffer sabotage from their own users. Whether you are running Linux, Windows, or even Mac, as a user you ultimately have the final say on whether your system will be secure or not. Does a user decide not to open an email that looks suspicious? Does a user keep logs on his or her computer for auditing purposes? Does a user minimize the attack surface of his or her system? Does the user even care? These are just a few of the questions one has to think about when dealing with the human factor.

So, ultimately, it is extremely hard to determine which one is more secure, because everyone has their own biases and belief system. But if you want my opinion, I would have to say that Linux is conceptually more secure. What do you think? Write your responses in the comment box below.

__________

[1] Noyes, Katherine. “Why Linux Is More Secure Than Windows.” PC World, 3 Aug. 2010. Web. 25 Sept. 2012. <http://www.pcworld.com/article/202452/why_linux_is_more_secure_than_windows.html>.

[2] Dudley, Brier. “Study Finds Windows More Secure than Linux.” Business & Technology. The Seattle Times, 17 Feb. 2005. Web. 26 Sept. 2012. <http://seattletimes.com/html/businesstechnology/2002182315_security17.html>.

[3] Chowdhury, Nilotpal. “Why Linux Is More Secure Than Windows.” Free Web Software Reviews, 20 Dec. 2007. Web. 25 Sept. 2012. <http://freewebsoftwarereviews.blogspot.com/2007/12/why-linux-is-more-secure-than-windows.html>.





Security Perspective on Cloud Computing

1 10 2012

There is a lively discussion about cloud computing. It is getting popular before most people really understand it, let alone its security problems. So what is the cloud? Are you taking advantage of it? Before discussing cloud security, let’s get to know it first.

What is cloud computing?

According to NIST, cloud computing is “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” [1] It builds on several recent technologies such as distributed systems, utility computing, and virtualization. These technologies are changing the whole business model from selling physical products to selling services. Though it is still a new technology, many people already benefit from it. With Dropbox and Apple iCloud, people no longer have to save their data on physical disks; they can simply leave it on the network. Big companies benefit even more: with the cloud, they can consolidate resources on a few servers instead of installing them on every computer, which saves considerable IT-management expense and fixed costs.

Security issues in cloud computing

What’s the bad news? There are some frequently mentioned security problems in the cloud [2] that keep people from embracing it.

  • First, much like administrator authority on our own computers, a vulnerable VM hypervisor is a target for malicious people. If hackers control the hypervisor, they can manipulate customers’ private data or deliver malicious services. This places very high requirements on the security of the virtualization layer to guarantee data confidentiality. Though many providers have claimed a high security level for their products, such claims have repeatedly been broken.
  • Second, because of the way cloud security itself is implemented, the more people use the cloud, the more secure it becomes. In brief, every client in the cloud network acts as a security monitor, and a larger number of monitors makes it easier to spot an attack and report it to the server. It is a good design, but it requires collecting data from customers. Who, then, is responsible for securing that data while the environment is under attack? This remains an open question.
  • Finally, virtualization, as a core feature of the cloud, mediates all of the host’s network traffic [3], which strongly attracts hackers’ attention. How to take full advantage of this layer while avoiding concentrated attacks is a hotly discussed problem.

Admittedly, the problems above are not unique to cloud computing; they also exist in the traditional datacenter network. But conventional security policies and encryption plans are not always compatible with a cloud environment [4]. Cloud architecture requires dedicated design in both security policy and technical safeguards. So what are the plans of the famous cloud providers?

  • Google claims that it splits files into parts and stores them in multiple files on different machines [5]. Besides, with files randomly named, it is really hard for a hacker or a malicious insider to steal a particular file. Google also encrypts data and invites third parties to try to break into its systems to test their reliability. Finally, if hardware goes bad, a device called “the crusher” is used to destroy the data. Here is a question, though: is there any plan for recovering the destroyed data, or any backup policy?
  • Apple promotes its cloud service (iCloud) by stating that data will be encrypted both in transmission and in storage [6]. A minimal sketch of what this kind of at-rest encryption can look like appears below.
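As referenced in the iCloud point above, here is a minimal, hypothetical sketch of client-side, at-rest encryption using the third-party Python "cryptography" package’s Fernet recipe; it illustrates the general idea only and is not a description of Apple’s or Google’s actual schemes:

```python
# At-rest encryption sketch (pip install cryptography). The key must be
# stored somewhere safe, separate from the encrypted data.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
box = Fernet(key)

ciphertext = box.encrypt(b"customer document to be stored in the cloud")
print(ciphertext[:40], b"...")

plaintext = box.decrypt(ciphertext)
assert plaintext == b"customer document to be stored in the cloud"
```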

A Crucial Truth of Cloud Security

Whatever the providers claim, security breaches keep happening. For example, the online storage service Dropbox was hacked this August, and many of its members received spam emails as a result [7]. Then we should ask: who is responsible for cloud security? Surprisingly, it is us, not the service provider! NIST points out: “Accountability for security and privacy in public cloud deployments cannot be delegated to a cloud provider and remains an obligation for the organization to fulfill.” [8]

Do you have a plan for cloud security?

The statement from NIST leaves people scratching their heads about how to protect data stored on a remote machine when they do not even know where the server is. There are at least a few basic protections we cloud users can take to protect our data:

  • Do remember to backup important files both on cloud and local disks.
  • Do not use the same user-ID and password on different sites.
  • Do not link all of your accounts together. [9]

In closing, cloud computing, as a newly developed technology, will face serious challenges for a long time. It requires careful design of security policy, technical protection, and related law. It will surely benefit us to a great extent and ultimately change the relationship between the computing world and human beings. But before that, be sure that you already have a good plan for cloud security.

_____________

  1. NIST Special Publication 800-145. The NIST Definition of Cloud Computing. http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf
  2. Vic Winkler. Cloud Computing: Virtual Cloud Security Concerns. TechNet Magazine, December 2011. http://technet.microsoft.com/en-us/magazine/hh641415.aspx
  3. Kathleen Hickey. Dark Cloud: Study Finds Security Risks in Virtualization. March 18, 2010. http://gcn.com/articles/2010/03/18/dark-cloud-security.aspx
  4. Securing the Cloud – VMware white paper. http://www.savvis.com/en-us/info_center/documents/savvis_vmw_whitepaper_0809.pdf
  5. Dan Rowinski. How Does Google Protect Your Data in the Cloud? July 22, 2011. http://www.readwriteweb.com/archives/how_does_google_protect_your_data_in_the_cloud.php
  6. iCloud: iCloud Security and Privacy Overview. http://support.apple.com/kb/HT4865
  7. Mark Prigg. Cloud Safety: Internet Storage Service Dropbox Admits Security Breach as Fears Grow over Storing Information Online. Mail Online, Aug 1, 2012. http://www.dailymail.co.uk/sciencetech/article-2182229/Dropbox-Storage-service-admits-security-breach-fears-grow-storing-information-online.html
  8. NIST Special Publication 800-144. Guidelines on Security and Privacy in Public Cloud Computing. http://csrc.nist.gov/publications/nistpubs/800-144/SP800-144.pdf
  9. John D. Sutter, CNN. How to Protect Your Cloud Data from Hacks. Aug. 9, 2012. http://www.cnn.com/2012/08/09/tech/web/cloud-security-tips/index.html