Is my data secure in the cloud?

31 10 2011

There is increasing acceptance and use of cloud computing products and services. I’m a big fan of applications such as Google Docs and Dropbox. These tools allow me to store, organize, and collaborate on files. Instead of sending files by email, which is difficult with large files, I can store them in the cloud and access my data anywhere at any time. I don’t need a portable memory device or my laptop to access my information or back up my data (the cloud does it for me). But what if I need to store more confidential information? Should I save my tax returns, financial records such as bank statements, or private correspondence with friends and family in the cloud?

In June 2011, Dropbox had a security breach in which a failure in their authentication mechanism allowed access to files without a password for about four hours. According to the company, the problem was caused by a “code update” that “introduced a bug affecting the authentication mechanism.”

I wonder: if I had sensitive personal information that was compromised or lost, would they be responsible for it? What protection do I have as a user of their services? To answer these questions, I reviewed their Terms of Service and Privacy Policy, and I found some interesting things I would like to share.

Terms of Service

Google has a license to reproduce, modify, translate, publish, publicly display, and distribute the content I submit, post, or display using their services. My content could also be sent to third-party companies.

Google does not guarantee that the use of their services will be timely, secure, or error-free, and states that I am solely responsible for any loss of data that results from downloading any material obtained through their services.

Similarly, Dropbox indicates that I am solely responsible for any loss or corruption of my “stuff” (data), and that if I want to protect the transmission of my data, it is my responsibility to use a secure, encrypted connection to communicate with their services. Should Dropbox force every page to use HTTPS?

Privacy Policy

Google, like Dropbox, generally uses third-party companies to process personal information. So other companies end up having access to my personal information.

Both Google and Dropbox indicate that in the case of a merger or acquisition, my personal information could be transferred and become subject to a different privacy policy.

Dropbox uses persistent cookies to save my registration ID and login password for future logins. Since these cookies aren’t marked as Secure (HTTPS only), is it safe to use Dropbox on an unencrypted or WEP wireless network?
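The Secure attribute is what tells a browser to withhold a cookie from plain-HTTP requests. As a quick illustration (the helper name and header strings below are my own, not Dropbox’s actual cookies), checking a `Set-Cookie` header for that attribute is a one-liner:

```python
def cookie_is_https_only(set_cookie_header: str) -> bool:
    """Return True if a Set-Cookie header carries the Secure attribute,
    meaning the browser will send the cookie only over HTTPS."""
    attributes = [part.strip().lower() for part in set_cookie_header.split(";")]
    return "secure" in attributes

# A cookie without Secure can be replayed by anyone sniffing open Wi-Fi.
print(cookie_is_https_only("session_id=abc123; Path=/; HttpOnly"))  # False
print(cookie_is_https_only("session_id=abc123; Secure; HttpOnly"))  # True
```

On an open or WEP-“protected” network, any cookie that fails this check travels in the clear with every HTTP request.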

In conclusion, it seems that they are not responsible for any loss of data, or for business or profit damages resulting from malfunctions. Is that fair?

Who is responsible for the security?

According to a study on the security of cloud computing providers performed by the Ponemon Institute and sponsored by CA Technologies (published in April 2011), it is important that cloud users be educated about the need to evaluate cloud applications and choose the ones that offer the strongest security mechanisms to protect their data.

The study was conducted in the U.S. and Europe and included both the cloud user community and the cloud provider community. One of the questions was: who is more responsible for ensuring the security of cloud resources? Sixty-nine percent of cloud providers indicated that the customer, not the provider, is responsible for securing the cloud. According to the users, the responsibility should be shared between the cloud provider and the user. The following chart shows the answers to the question:

Source: Ponemon Institute, Security of Cloud Computing Providers Study

“Security in the cloud is a joint responsibility and cloud users and providers should consider the importance of working together to create a secure and less turbulent computing environment.” (Ponemon Institute, Security of Cloud Computing Providers, p. 15)

Security of Cloud Computing Providers Study, independently conducted by Ponemon Institute LLC. Publication date: April 2011. Sponsored by CA Technologies.


Honeypots: Silent but Effective

17 10 2011

by Francisco Robles

Every day we see in the news that companies have had their systems compromised, with different consequences in each case. But those are only the most famous cases. The people in charge of systems security within a company struggle every day with attack attempts from both insiders and outsiders who want to break in and gain unauthorized access to a certain resource (computer, program, data, information, etc.).

Technology evolves at a very fast pace, and so do the methods and tools used by attackers. The traditional approach has relied on IDSs (Intrusion Detection Systems), with their well-known limitations (false positives, false negatives, data overload), along with firewalls and other passive systems [1].

Because of that, a different approach has been coined: the honeypot. Some authors refer to it as a specialized version of an IDS [2], while others, such as Lance Spitzner, define it as:

A honeypot is an information system resource whose value lies in unauthorized or illicit use of that resource.

No matter which definition is used, the main purpose of a honeypot remains the same: to detect and understand how an attacker wants to compromise a system. “A honeypot is a resource, which pretends to be a real target. The main goals are the distraction of an attacker and the gain of information about an attack and the attacker” [4]. The honeypot is set up to emulate a production system so that it is attractive to an attacker. It is also a trap, because the attacker does not know that he or she is being monitored. The system is configured as an isolated party with which no legitimate system will communicate; thus, if any kind of communication is detected, there is a high probability that it was caused by an intruder. So the honeypot remains silent until someone accesses it; then it emits an alert.
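The “silent until touched” behavior is easy to see in code. The sketch below is a toy of my own, not any particular honeypot product: it binds a socket that no legitimate client has any reason to contact, so every accepted connection is logged as a suspicious event.

```python
import datetime
import socket
import threading

def start_honeypot(host="127.0.0.1", port=0, max_events=1):
    """Listen on an otherwise-unused port. Because no legitimate system
    communicates with it, any connection at all is worth an alert."""
    events = []  # (timestamp, remote address) pairs
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((host, port))  # port=0 lets the OS pick a free port
    server.listen(5)
    chosen_port = server.getsockname()[1]

    def serve():
        for _ in range(max_events):
            conn, addr = server.accept()  # blocks, staying silent
            events.append((datetime.datetime.now().isoformat(), addr))
            conn.close()  # low-involvement: no real service is offered
        server.close()

    worker = threading.Thread(target=serve, daemon=True)
    worker.start()
    return chosen_port, events, worker
```

Anything that connects, even a casual port scan, shows up in `events`; that is exactly why honeypot data sets are small but high-value.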

There are two main uses of honeypots: production and research. The first is focused on helping to mitigate risk, whereas the second is oriented toward gathering as much information as possible [5].

According to Spitzner, the following are the advantages of a honeypot versus an IDS [3]:

–   Small data sets of high value. Instead of collecting all the traffic and then having to analyze huge amounts of data to detect abnormal activity, the honeypot only collects data when someone interacts with it. The data collected is thus smaller but more valuable.

–   New tools and tactics. Honeypots are designed to be attractive to attackers, so they can detect new methods and tools unseen before. All of this without the necessity to update anything in the honeypot.

–   Minimal resources. They only capture bad traffic, so there is no need for fancy equipment.

–   Encryption or IPv6. No matter which technologies attackers use, the honeypot will detect and capture their activity.

–   Information. Honeypots can collect in-depth information.

–   Simplicity. Honeypots are conceptually very simple. There are no fancy algorithms to develop, state tables to maintain, or signatures to update.


But no system is free of disadvantages, and the honeypot is not exempt. A honeypot is only useful when an attacker interacts with it, so its field of view is very narrow. There is also a risk, as with any equipment, in using a honeypot: the system may be compromised by an attacker and used to attack other systems. This risk depends on which type of honeypot is used. Another risk arises when the attacker discovers that he has fallen into a trap and seeks revenge, possibly with the help of other parties.

The two main types of honeypots are low-involvement and high-involvement. The former only offers fake services, which can be configured, but the operating system is tightly controlled. This limits the impact in the event an attacker compromises the system. The latter has a real operating system, which makes it possible to gather more information about an attacker and his or her procedures, as well as to disguise the trap better. But there is a greater risk with this approach, because the attacker may do more damage if he or she gains control of the computer [3].

Usually, low-involvement honeypots are used in production systems, whereas high-involvement honeypots are used for research. But this is not a hard rule; both types of honeypots can be used for both purposes.

The effectiveness of a honeypot is determined by many factors, some of which are:

–   Capacity to mimic a legitimate system. When a honeypot is disguised as an authentic system used by the organization, it is more attractive for the attacker to interfere with it. Some products are resistant to fingerprinting techniques.

–   Infrastructure around the honeypot. Using an IDS and firewalls alongside the honeypot will increase the capacity to learn more about an attacker. The honeypot is not a replacement for any other security system; it is designed to coexist with such systems.


There are other approaches that use the main principles of honeypots. One is the honeynet, where a complete, isolated network with clients and servers is set up. It is even loaded with decoy data and programs that appear to be authentic. Another is the honeytoken, which is not a computer per se but a digital entity (a bogus Excel file, credentials, etc.) that no one should interact with unless they are an unauthorized party [6].

In conclusion, honeypots represent a different approach to identifying and combating attackers. Because a honeypot is always there and only acts when someone interacts with it, the data it gathers is very useful, reducing the chance of false positives and false negatives. But this advantage comes with the limitation that its field of view is very narrow, which is why it was designed to coexist with other systems like IDSs, not to replace them.

The idea of attracting an attacker helps either to distract him from attacking real computers or to raise alerts immediately so that other measures can be taken in time. Its simplicity is what makes it very attractive to security personnel, because it does not rely on rules or signatures that become obsolete over time.

The same concept can also be applied to whole networks (honeynets) and even digital entities (honeytokens).


[1] “Honeypots: Simple, Cost-Effective Detection” by Lance Spitzner

[2] “Intrusion Detection FAQ: What is a honeypot?” by Loras R. Even

[3] “Definitions and Value of Honeypots” by Lance Spitzner

[4] “Enhancing Network Intrusion Detection System with Honeypot” by S. Yeldi, S. Gupta, T. Ganacharya, S. Doshi, D. Bahirat, R. Ingle, and A. Roychowdhary. TENCON 2003, Conference on Convergent Technologies for Asia-Pacific Region, IEEE.

[5] “Honeypot: A Supplemented Active Defense System for Network Security” by Feng Zhang, Shijie Zhou, Zhiguang Qin, and Jinde Liu. Parallel and Distributed Computing, Applications and Technologies, 2003 (PDCAT 2003), Proceedings of the Fourth International Conference, IEEE.

[6] “Honeypots: Catching the Insider Threat” by L. Spitzner. Computer Security Applications Conference, 2003, Proceedings, 19th Annual, IEEE.

Supercookies (not too sweet…!!)

17 10 2011

“Too bored doing assignments… let me stream the latest episode of Dexter.” A thought that has probably crossed a million minds. Little do we know that a harmless visit to an online TV streaming site could lead to us being tracked by Spotify or MSN [1][2]. Anybody with even a little experience with computers and web browsing knows about cookies. They know that cookies are an unavoidable woe, albeit helpful in certain cases. Disabling cookies or flushing them out altogether does not help us much with tracking [1]. Most tracking techniques available today are smarter than that. Sites like Hulu, MSN, Flixster, and Spotify use (or used until some time back) a method called supercookies to track user behavior and record data without the user’s knowledge [1][2].

Supercookies, also called Flash cookies or zombie cookies [1], collect user data to an extent that exceeds the industry’s usual practice, giving rise to major privacy concerns. The challenge posed by supercookies is mainly due to the fact that these files are not stored in the usual cookie locations [1], making it extremely difficult for users to find and delete them. For example, these files are sometimes stored in a file used by Flash. The site uses a little-known Flash technique to save unique ID numbers and later reuses them to respawn traditional HTML cookies after checking its secondary stash for matching user IDs. Another potential source of harm is that supercookies are not detected by the browser’s cookie controls. Researchers have also described a cache-cookie method using ETags, which can uniquely track users even when all cookies are disabled and ‘Private Browsing’ mode is enabled [2].
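To see why ETags work as a tracking channel, recall that the browser caches an ETag and echoes it back in the `If-None-Match` header on every revisit, even with cookies disabled. Here is a hypothetical server-side sketch of the idea (the class and field names are mine, not any real firm’s code):

```python
import uuid

class ETagTracker:
    """Toy demonstration of ETag-based tracking: the cache validator
    doubles as a persistent, cookie-free user identifier."""

    def __init__(self):
        self.visits = {}  # etag -> number of requests seen

    def handle_request(self, if_none_match=None):
        """Return the ETag the server would send for this request."""
        if if_none_match in self.visits:
            # Returning visitor: the echoed ETag identifies them.
            self.visits[if_none_match] += 1
            return if_none_match
        # First visit (or a wiped cache): mint a fresh unique ID.
        etag = uuid.uuid4().hex
        self.visits[etag] = 1
        return etag

tracker = ETagTracker()
first = tracker.handle_request()        # new visitor: unique ETag issued
second = tracker.handle_request(first)  # browser echoes it: user recognized
```

Clearing cookies does not clear the browser cache, so the identifier survives exactly the cleanup step most users believe resets their tracking state.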

Supercookies are usually the outcome of a site’s relationship with an advertising firm that places great importance on user behavior analysis [1]. KISSmetrics is one such data collection firm that uses the supercookie technique to gather data about users’ browsing preferences. For example, it can tell whether the user who just visited their site using Google’s browser is the same one who visited Hulu by clicking on an ad on Facebook. It does this by storing a unique ID associated with that user and maintaining that trail even if the cookie history is flushed [3]. Because supercookies track users across multiple sites, unlike cookies, whose domain is limited to the particular site that installed them, the inherent privacy concerns are large [1].

In today’s competitive day and age, it is understandable for a site to track its users and their choices while they are on the site, but keeping tabs on them even after they have navigated away from your page is ethically another ballgame altogether. The so-called ‘right to track’ is certainly worded into the terms of use or user agreement the user signs when signing up for a particular site, but in all honesty, who ever bothers to go through the extremely lengthy, carefully documented ‘agreement’ scripts? For the users, tracking equates to a breach of trust, a brand betrayal of sorts. Many companies ‘unknowingly’ use this technique and stop using it when it is pointed out to them; others do not [1].

Once the data is collected, companies have an added responsibility to protect it, because the consequences of losing it can be serious: consumers may no longer trust the brand, and protecting the data becomes even more important to avoid legal liability [1]. The increasing concerns about user privacy led the Federal Trade Commission (FTC) to force changes and push for the formulation of regulatory policies on invasions of user privacy. The Internet and marketing industry responded with certain self-regulatory policies, such as restricting themselves from looking into sensitive data like medical records [1]. Apart from privacy, these hard-to-find files can also be a major security threat: if they are infected with a trojan, detection and prevention can be tough [2]. Hence it becomes extremely important for users to become more aware of such practices and to ensure that such techniques are not used extensively. It is entirely in our hands not to become mere guinea pigs in the world of advertising and marketing strategies.

Soltani aptly summarizes this race: “This is yet another example of the continued arms race that consumers are engaged in when trying to protect their privacy online, since advertisers are incentivized to come up with more pervasive tracking mechanisms unless there are policy restrictions to prevent it.” [3]


[1] Christian Olsen, “Supercookies: What You Need to Know About the Web’s Latest Tracking Device”, 2 September 2011.

[2] Michael Anderson, “Invasion of the Supercookies”, 18 August 2011.

[3] Ryan Singel, “Undeletable Cookie”, 29 July 2011.

Compliance Concerns of Cloud Security

12 10 2011

With companies like Google, Amazon, Salesforce, Microsoft, VMware, and many others aggressively working in this domain, it is clear that cloud computing is big and will keep growing. With the cloud’s potential to host services such as IaaS, SaaS, and PaaS, IT management for many user organizations will be far simpler and will reduce costs, along with many other advantages. These services are provided in four deployment models: public, community, private, and hybrid.

However, there have been growing concerns about security compliance in the public deployment model. These need to be addressed in virtual as well as physical environments.

Regulation of specific industries, such as health services, financial services, and insurance, adds to the complexity of the compliance and governance required[i]. There is also the cross-border issue: some information must not cross a country’s boundaries, and in some cases doing so may violate the national privacy and audit regulations that govern the organization. The cloud service provider must itself be compliant with the customer’s compliance policies so that the integrity of the data can be maintained. There is also an insider threat to consider. Another concern is how data and information would be destroyed if the customer switches between cloud vendors.

To address these concerns, we can choose cloud-based services more judiciously: deciding what data needs to be uploaded to the cloud, who should access that data, and with what access rights. We can secure the information with better compliance policies.

For example, authentication and authorization policies must be applied so that data and information are accessible only to the concerned personnel. Confidentiality must be maintained for data in the cloud as well. Such policies can help protect information at the client as well as the vendor location in the case of an insider threat, or when the data is industry-specific. If the data is highly confidential and its loss could jeopardize the existence of the company, the company can also opt to host that information on its own premises and consume the remaining services from the cloud. Security policies such as a sensitive-information labeling policy and a sensitive-information distribution policy must also be adopted by both the company and the vendor. This helps prevent unauthorized access by the company’s employees as well as by the vendor’s administrators, or allows access to be granted with limited permissions. A contract should also be in place with the vendor for implementation of a secure information disposal policy, which may be required on termination of the contract or when the time has come to dispose of the data. An audit of the vendor’s cloud service center should be conducted, and its results should be disclosed to customers. This will help clients remain security compliant and make the vendor aware of non-compliance issues.
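A sensitive-information labeling policy ultimately reduces to a simple comparison at access time. The level names and ordering below are illustrative only, not taken from any standard or vendor:

```python
# Illustrative sensitivity ladder; real policies define their own levels.
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

def can_access(user_clearance: str, data_label: str) -> bool:
    """Grant read access only when the user's clearance meets or exceeds
    the label attached to the data (a simple "no read up" rule)."""
    return LEVELS[user_clearance] >= LEVELS[data_label]

# A vendor admin cleared only for "internal" cannot read labeled data.
print(can_access("internal", "confidential"))    # False
print(can_access("restricted", "confidential"))  # True
```

The point of labeling is that this check can be enforced uniformly at both the client and the vendor, rather than relying on each administrator’s judgment.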

Industry-specific cross-border concerns can only be addressed with the vendor’s help (in the case of a public cloud model); the vendor must agree to disclose the location of its data and service centers.

If the cloud vendor acts as an extension of the client organization, it will greatly reduce both the compliance concerns and the security issues of the cloud.

Computer Forensics

11 10 2011

If you have ever watched a modern TV crime drama such as CSI or Law and Order, chances are you have seen the “tech geeks” who are brought into a crime scene to investigate a computer and recover data and files for the investigation.  What you may not know is that these people do actually exist in the real world, and they are actively working each day to bring criminals to justice.  Their efforts help to find digital criminals around the world, and are an important part of digital crime investigations.

US-CERT defines computer forensics as “the discipline that combines elements of law and computer science to collect and analyze data from computer systems, networks, wireless communications, and storage devices in a way that is admissible as evidence in a court of law” [1].  Compiling this evidence is an important part of the investigative process for the two primary types of computer investigations: when one or more computers were used as an instrument to commit a crime or some other type of misuse, and when the computer or network is itself the target of a crime [2].  While analysis of the collected data is what ultimately provides the necessary evidence, it can sometimes be difficult to collect the information in the first place.

The two basic types of data collected by investigators are persistent data and volatile data.  Persistent data is the data that is stored on a local hard drive (or another medium) and is preserved when the computer is turned off [1].  Volatile data is any data that is stored in memory, or exists in transit, that will be lost when the computer loses power or is turned off [1].  It is the volatile data that can be difficult to collect, as it can be easily lost during the collection process if the investigators are not careful.  Additional complications to collecting data are damaged, deleted, or encrypted files that require investigators to use the correct tools to prevent further damage to the files during the collection process [1].

One example of forensics assisting in a court trial is provided in the recent information released regarding the death of pop singer Michael Jackson.  The computer forensics examiner in the trial recovered critical timeline emails, digital medical charts thought to be non-existent, and a damaging audio recording of an impaired Michael Jackson reportedly made by his personal doctor, who is on trial [4].  This example showcases the ability of computer forensics to recover data that is believed to be lost or undiscoverable.  Modern methods and training are still evolving and improving, increasing the number and skills of individuals who can provide support in cases such as this.

If you are wondering how you can get your foot in the door to the computer forensics world, one example of a training certification program is the Computer Hacking Forensic Investigator certification provided by the EC-Council.  This certification program provides people with the “necessary skills to identify an intruder’s footprints and to properly gather the necessary evidence to prosecute in the court of law” [3].  Such individuals are in-demand, and can apply new and evolving technologies in order to recover evidence in the field.  While certification programs such as this one provide the training, it is also important to acknowledge that methods must adapt as the technology evolves.  For this reason, research facilities like CERT are looking into new methods for computer forensics.

At CERT, the forensics team works on “gap areas” that are not addressed by commercial tools or standard techniques [5].  These areas include resource amplification, memory extraction and analysis, and encryption counter-measures [5].  The study of these areas is intended to improve the performance of computer forensics and increase the ability of investigators to recover and analyze data.  If successful, this work would greatly improve the quality and success rate of digital investigations.  As the field continues to evolve and improve, I think it would be great to be on the cutting edge of innovative ideas and techniques for recovering and analyzing data that can aid in the capture of cyber criminals.  While you may not have your own trailer or dressing room, you could be a real-life TV star working to bring criminals to justice, though you may have to bring your own camera.



Tracking Cyber Criminals

10 10 2011

Privacy is always threatened when confidentiality is lost or seized in any cyber crime. Cyber crimes can be in any form such as identity thefts, frauds, spams, viruses and so on. Targets of cyber crime range from end users to large organizations. Most of us have been targets of cyber crime at some point in some form.

“The Internet Crime Complaint Center, also known as IC3, serves as a hub to receive, develop and refer criminal complaints regarding the rapidly expanding occurrences of cyber-crime” [1]. According to the 2010 Internet Crime Report, IC3 receives and processes 25,000 complaints per month, and in 2010 IC3 received its second-highest number of cyber complaints [2]. The report shows the extent of the cyber crime happening around us. The question at hand is: how do these centers track cyber criminals?

IC3 has a reporting system that captures and tracks the information given by the targets, such as name, mailing address, telephone number, web address, specifics of the fraud, and any other relevant details. Details given by the targets can be very valuable in tracking down the cyber criminals. IC3 tracks these reports, and analysts prepare cases against cyber criminals based on the information provided. In 2010, IC3 analysts prepared 1,420 cases (representing 42,808 complaints). As per the 2010 Internet Crime Report, “Out of referrals prepared by FBI analysts, 122 open investigations were reported, which resulted in 31 arrests, 6 convictions, 17 grand jury subpoenas, and 55 search/seize warrants” [2]. Overall, IC3 is doing superior work investigating and tracking these cyber criminals.

One example of tracking a cyber criminal with the assistance of the target happened in the UK. HMA generally kept a log of information on all logged-in users, including login and logout times, to trace any illegal activity. HMA helped the FBI by providing the IP address of the suspect. Due to their timely help, a member of the LulzSec hacker group was arrested in September 2011. This group had previously been involved in various illegal attacks, including the attack on Sony’s PlayStation Network [3]. A company’s help in providing relevant specifics can be very useful in tracking criminals in cases like this.

“Cody Kretsinger, 23, of Phoenix was accused of participating in a hack of the Sony Pictures website that exposed the names, email addresses, and passwords of thousands of consumers. He was arrested on charges of conspiracy and the unauthorized impairment of a protected computer”[4].

“Dozens of people in North America and Europe have been snared in a trans-Atlantic investigation into a string of attacks attributed to LulzSec and Anonymous. Earlier this month, UK police charged three men and a 17-year-old for various computer offenses, including DDoS attacks in December on PayPal, Amazon, MasterCard, Bank of America, and Visa” [4].

All these instances show that these agencies are putting in every effort to reduce cyber crime. It is also important for end users and consumers like us to report any suspected attacks to these agencies, which helps in tracking the cyber criminals.

Wrong Answers from Authoritative Parents: Building Trust through Convergence

10 10 2011

by Christian Roylo

Do you remember how, throughout your childhood, it seemed that your parents knew the answers to everything?  Then, as you reached your teenage years, you figured out that no matter how much you trusted your parents, they did not always have the right answers.  You then turned to other sources for answers: your friends, teachers, extended family members, magazines, books, and television.

The state of practice in Internet security now seems to be in the “teenage” phase of life: a realization that the traditional “authoritative parent” models of security may not always produce the right answers, and sometimes cannot even be trusted.  Driving factors for this paradigm shift include the arguments that the authoritative parents are getting too old to handle their teenagers (current Internet security models were designed for a “younger” Internet), or that the parents are becoming “big brother” (interested in monitoring secure communications).  Thus, it is natural for Internet security to move toward dynamic and distributed trust models.

Bad analogy aside, recent research suggests that the current SSL Certificate Authority security model, which is based on trusting traditional “authoritative parents”, is flawed [1].  One proposed fix, dubbed Convergence, utilizes a dynamic and distributed trust model that could help users protect themselves against SSL-certificate-related attacks.

Security researcher Moxie Marlinspike introduced Convergence at the most recent Black Hat and DEF CON conferences, where he explained that he based his model on the “Perspectives” research project conducted at Carnegie Mellon University.  Unlike the traditional Certificate Authority system, which employs a fixed list of immutable Certificate Authorities, Convergence works on the principle of collective trust, using servers called “notaries”.  These notaries verify a web site’s certificate by viewing it through different “network perspectives”: comparing certificates observed from different networks and geographic locations. [2]

The system was designed to address some of the security weaknesses of SSL, such as the issuance of fraudulent certificates, which can be used to conduct man-in-the-middle attacks and surreptitious monitoring.  Marlinspike’s release of Convergence could not have been timed better.  Only months earlier, Certificate Authority Comodo Group Inc. was attacked by an Iranian hacker who tricked Comodo into issuing fraudulent certificates for Gmail, Yahoo Mail, and Hotmail. [3]  Shortly after Convergence’s release, CA DigiNotar was attacked, resulting in the issuance of 531 fraudulent certificates for a number of major web sites and even other root CAs. [4]  Just today, GlobalSign is reported to have stopped issuing SSL certificates while it investigates claims that it was the recent victim of an attack. [5]

The framework for Convergence is based on the idea of “trust agility” which consists of two fundamental principles that are missing from the current CA model.  This is described in a blog post by Marlinspike as [6]:

  1. A trust decision can be easily revised at any time
  2. Individual users have the option of deciding where to anchor their trust

Convergence, which is currently in its beta-testing phase, is a web browser add-on that replaces the existing CA infrastructure.  When a user visits a web site, Convergence compares the certificate obtained directly from the site to the certificates obtained by the notary sites.  A mismatch of certificates indicates a fraudulent certificate.  Convergence is currently only available for the Firefox browser; Google has announced that it is not planning to implement it in Chrome. [7]
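The comparison at the heart of the scheme can be sketched in a few lines. This is my own simplification of the notary check, not Convergence’s actual code: each notary reports the certificate fingerprint it observed for the site, and the client trusts the connection only if enough vantage points agree with what it saw itself.

```python
def notaries_agree(local_fingerprint, notary_fingerprints, quorum=0.5):
    """Return True if more than `quorum` of the notaries saw the same
    certificate fingerprint that the client received directly.
    A man-in-the-middle near the client alters only the local view,
    so the notaries' consensus exposes the forged certificate."""
    if not notary_fingerprints:
        return False
    matches = sum(1 for fp in notary_fingerprints if fp == local_fingerprint)
    return matches / len(notary_fingerprints) > quorum

# Honest case: every vantage point sees the same certificate.
print(notaries_agree("ab:12", ["ab:12", "ab:12", "ab:12"]))  # True

# Local MITM: the client's view disagrees with all notaries.
print(notaries_agree("ff:99", ["ab:12", "ab:12", "ab:12"]))  # False
```

The “trust agility” part is that the user, not a fixed CA list, chooses which notaries to consult and what quorum to demand, and can revise either choice at any time.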

Although Convergence is in its infancy, a big challenge for Marlinspike is reaching a critical mass of users, which could influence Google and Microsoft to include it in their browsers and companies to sponsor additional notary servers.  Convergence did receive a big boost just last week, when security firm Qualys stated that it would finance and support two notary servers [8].

As highlighted by the development of Convergence, information security should start to follow the natural progression of the Internet itself.  As Internet innovations move towards distributed, cloud, social, web-of-trust, and crowd-sourced models, we should see (and welcome) information security models moving in this direction as well.

Whether Convergence succeeds in reaching critical mass is something we will have to wait and see; however, the fundamental principle of moving trust away from centralized authoritative parents to distributed models will likely be the future of security.  It is as certain as the teenager mentioned at the start of this post growing up to become a parent himself, and his teenage children turning elsewhere to seek answers.  When that happens, the cycle starts all over again, and someone will come along to develop Convergence’s successor.


[1] Higgins, Kelly Jackson, “Researcher Exposes Flaws in Certificate Authority Web Applications”, 8/2/09,

[2] Marlinspike, Moxie, “BlackHat USA 2011: SSL And the Future Of Authenticity”,

[3] Bright, Peter, “Independent Iranian Hacker Claims Responsibility for Comodo Hack”, 3/28/2011,

[4] Prins, J.R. “Interim Report, DigiNotar Certificate Authority breach ‘Operation Black Tulip’”, 9/5/2011,

[5] Leyden, John, “GlobalSign stops issuing SSL certs, probes hacker claims”, 9/7/11,

[6] Moxie Marlinspike, “SSL And The Future of Authenticity”, 4/11/2011,

[7] Goodin, Dan, “Google: SSL alternative won’t be added to Chrome”, 9/8/2011,

[8] Goodin, Dan, “Qualys endorses alternative to crappy SSL system”, 9/30/2011,