Detecting Browser Vulnerabilities Prior to Client Exploitation

2 11 2012

IBM X-Force’s “2012 Mid-Year Trend and Risk Report” announces a sharp increase in browser exploits. Browser exploits, and the threats they pose to clients, have been a major concern for a long time, and a great deal of research has gone into identifying malicious websites.[1]

Various browser-developing organizations have run projects to find these exploits and provide a safer environment for their clients. The most common ways of finding vulnerabilities include browser penetration tests done by experts, bug reports sent in by users, and exploits discovered by white-hat hackers.

As an example, in October 2012 a hacker identified as Pinkie Pie demonstrated a full exploit of Chrome during a competition at the Hack in the Box 2012 conference and won $60,000. Within 12 hours of the vulnerability being demonstrated, Google engineers released a fix.[2]

Moreover, Google provides a service called Safe Browsing that enables users to check URLs against a list of suspected pages. There are two experimental APIs available for using the service, the Safe Browsing API v2 and the Safe Browsing Lookup API. The two APIs differ in ease of implementation, handling of user privacy, and response time.[3]
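
As a rough illustration, the sketch below queries the Lookup API with a single HTTP GET to check one URL. It assumes the endpoint, parameters, and response codes as documented around 2012 (200 with the name of the matching list, 204 when the URL is not listed); the API key, client name, and the helper check_url are placeholders rather than a definitive client.

    import urllib.error
    import urllib.parse
    import urllib.request

    # Endpoint and parameters as documented for the Safe Browsing Lookup API
    # circa 2012; the key scheme and endpoint may have changed since then.
    LOOKUP_URL = "https://sb-ssl.google.com/safebrowsing/api/lookup"
    API_KEY = "YOUR_API_KEY"  # placeholder -- obtain a real key from Google

    def check_url(url):
        """Return the threat list(s) reported for url, or None if it is not listed."""
        params = urllib.parse.urlencode({
            "client": "demo-app",   # arbitrary client name
            "apikey": API_KEY,
            "appver": "1.0",        # version of this client
            "pver": "3.0",          # protocol version
            "url": url,
        })
        try:
            with urllib.request.urlopen(LOOKUP_URL + "?" + params) as response:
                if response.status == 200:
                    # Body names the matching list(s), e.g. "malware" or "phishing"
                    return response.read().decode("utf-8")
        except urllib.error.HTTPError:
            pass  # 400/401/503: bad request, invalid key, or service unavailable
        return None  # 204 No Content: the URL is not on any list

    if __name__ == "__main__":
        print(check_url("http://example.com/") or "not listed")

A client that is more concerned about privacy would use the v2 API instead, which downloads hashed URL prefixes and does the matching locally rather than sending every visited URL to Google.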

Most of the methods and examples mentioned above involve learning from users’ bad experiences: users are exploited first, and only then is the problem fixed. Exploits usually remain unknown or unreported for some time, and even once they are detected and fixed, it takes a while for clients to update their browsers. The damage the malware causes may never be repaired. Hence, browser developers have to do their best to fix any vulnerability before releasing the browser.

With the hope of detecting malicious websites, a research project began crawling 18 million URLs in 2005 and repeated the crawl over a period of time.[4] The problem with this approach was that crawling the entire web and fully investigating every URL took a long time and required vast resources.

Around the same time, Microsoft conducted a research project, called “Strider HoneyMonkey Exploit Detection”, aimed at finding the vulnerabilities in unpatched browsers that attackers use to install malware on users’ computers. Articles about the project began to circulate in May of 2005, and Microsoft filed a patent for it by the end of 2007.[5] The exploit-detection system analyzed an input list of suspicious links, or links that would affect a large number of users if malicious (e.g., the most popular websites). The problem with this project was that the results were not conclusive, since the test was not run on the whole World Wide Web.
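
The core idea can be sketched in a few lines: load a URL in an unpatched browser inside a disposable virtual machine, interact with nothing, and treat any file that appears in locations the browser should never write to as evidence of a drive-by exploit. The sketch below is a simplified illustration of that idea, not Microsoft’s implementation; the monitored paths, browser command, and wait time are placeholder assumptions.

    import os
    import subprocess
    import time

    # Simplified HoneyMonkey-style check: visit a URL with an unpatched browser
    # inside a throwaway VM and flag the visit if files appear in directories the
    # browser should never touch without user interaction. All paths and commands
    # below are placeholders for a real, instrumented guest machine.
    MONITORED_DIRS = [r"C:\Windows", r"C:\Program Files"]
    BROWSER_CMD = ["iexplore.exe"]   # the unpatched browser under test (placeholder)
    VISIT_SECONDS = 120              # give drive-by exploits time to fire

    def snapshot(dirs):
        """Record every file path currently present under the monitored directories."""
        seen = set()
        for top in dirs:
            for root, _, files in os.walk(top):
                seen.update(os.path.join(root, name) for name in files)
        return seen

    def visit_and_detect(url):
        """Open url without any clicks and return the files dropped during the visit."""
        before = snapshot(MONITORED_DIRS)
        browser = subprocess.Popen(BROWSER_CMD + [url])
        time.sleep(VISIT_SECONDS)
        browser.kill()
        return sorted(snapshot(MONITORED_DIRS) - before)

    if __name__ == "__main__":
        dropped = visit_and_detect("http://suspicious.example/")
        if dropped:
            print("Possible exploit; files dropped:", dropped)
            # In a real pipeline the VM would now be discarded and re-imaged.

Because the page is merely loaded and never clicked, any persistent change to the machine is strong evidence of an exploit rather than a user mistake.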

The solution is to combine the advantages of both methods to obtain a conclusive result. A lightweight crawl can flag a set of suspected websites; a full analysis is then performed only on that list of suspected URLs. Search-engine providers such as Google, which are already crawling the web and have the facilities, can perform the initial stage and produce the initial list.
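
A rough sketch of such a two-stage pipeline is shown below: a cheap static filter flags pages containing patterns commonly associated with drive-by downloads (hidden iframes, obfuscated script), and only the flagged URLs go on to full dynamic analysis such as the VM-based visit sketched above. The heuristics and helper names here are illustrative assumptions, not a production detector.

    import re
    import urllib.request

    # Stage 1: cheap static heuristics applied to every crawled page. These
    # example patterns (hidden iframes, shellcode-style unescape, obfuscated
    # eval) are illustrative only and would miss plenty of real attacks.
    SUSPICIOUS_PATTERNS = [
        re.compile(rb"<iframe[^>]+(width|height)\s*=\s*['\"]?0", re.I),
        re.compile(rb"unescape\s*\(\s*['\"]%u", re.I),
        re.compile(rb"eval\s*\(\s*unescape", re.I),
    ]

    def lightweight_scan(url):
        """Fetch the page once and return True if any cheap heuristic matches."""
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                body = response.read(512 * 1024)   # cap the download size
        except Exception:
            return False                           # unreachable pages are skipped
        return any(pattern.search(body) for pattern in SUSPICIOUS_PATTERNS)

    def build_suspect_list(urls):
        """Keep only the URLs worth the cost of full dynamic analysis (stage 2)."""
        return [url for url in urls if lightweight_scan(url)]

Stage 2 would then run each suspect through the heavyweight analysis, so the expensive instrumented visits are spent only on the small fraction of the web that looks dangerous.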

 


[1] Michael Cooney. “IBM cyber security watchdogs see increase in browser exploits and encryption abuse”. Network World. 21 Sept 2012. Accessed 7 Oct 2012. <http://www.networkworld.com/community/node/81449>

[2] Jeff Goldman. “Hacker Pinkie Pie Finds Chrome Exploit, Gets $60,000”. eSecurity Planet. 11 Oct 2012. Accessed 12 Oct 2012. <http://www.esecurityplanet.com/browser-security/hacker-pinkie-pie-finds-chrome-exploit-gets-60000.html>

[3] “Safe Browsing API”. Google Developers. 10 Apr 2012. Accessed 5 Oct 2012. <https://developers.google.com/safe-browsing/>

[4] Alexander Moshchuk, Tanya Bragin, Steven D. Gribble, and Henry M. Levy. “A Crawler-based Study of Spyware on the Web”. Proceedings of NDSS 2006.

[5] “Strider HoneyMonkey Exploit Detection”. Microsoft Research. 20 Jan 2010. Accessed 5 Oct 2012. <http://research.microsoft.com/en-us/um/redmond/projects/strider/honeymonkey/default.aspx>


One response

8 11 2012
Amar Bhosale

Nice blog.. it would be interesting to check how browser extensions can help (if at all) in this regard.
