ACM Queue has an interesting article on the security measures taken in the Google Chrome web browser to thwart attacks that exploit browser vulnerabilities.
The article nicely summarizes the three main strategies used to achieve this goal:
- Mitigate or nullify the damage caused by vulnerabilities.
- Push updates frequently.
- Warn users about malicious sites with the help of a global database of such sites.
The first part consists of two things: preventing the damage in the first place, and, if damage does happen, keeping it isolated so that it has no side effects. Malicious code execution can be prevented with help from the OS, the hardware, and the toolchain. Techniques include:
- Data Execution Prevention: mark the NX (not executable) flag on pages that hold the heap, stack, etc., so that when a buffer overflow or similar flaw is exploited to plant code on the stack or heap, execution of that code is prevented. The process will just crash.
- Stack overflow check: a small random value (a "canary") is placed between the local variables and the saved return address. Before returning, that value is checked; if it has changed, the stack has been smashed by an overflow. This feature is provided by the compiler. It is such a simple technique that I wonder why compilers do not enable it by default (GCC offers it as -fstack-protector, and MSVC as /GS).
- Address Space Layout Randomization: this seems to be a new feature where the data/stack/heap sections start at randomized addresses, unlike the traditional way of starting them at well-known virtual addresses in the process address space. This makes locating those sections difficult for an attacker.
- Heap Corruption Detection: this is not cleanly achievable unless the memory allocator (or runtime) supports it as a native feature.
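The stack canary check above is easy to picture with a toy simulation. This is only a sketch of the idea in Python, not what a compiler actually emits: the frame layout, the `unsafe_strcpy` helper, and the function names are all made up for illustration.

```python
import secrets

CANARY = secrets.token_bytes(4)  # random per-run value, as the compiler would emit

def make_frame(buf_size: int) -> bytearray:
    # Toy frame layout: [local buffer][canary][saved return address]
    return bytearray(buf_size) + bytearray(CANARY) + bytearray(8)

def unsafe_strcpy(frame: bytearray, data: bytes) -> None:
    # No bounds check: writes past the buffer, like a C strcpy would
    frame[:len(data)] = data

def check_canary(frame: bytearray, buf_size: int) -> None:
    # Runs just before "returning"; a changed canary means the stack was smashed
    if bytes(frame[buf_size:buf_size + 4]) != CANARY:
        raise RuntimeError("stack smashing detected")

frame = make_frame(16)
unsafe_strcpy(frame, b"short")      # fits in the buffer: canary intact
check_canary(frame, 16)             # passes silently

frame = make_frame(16)
unsafe_strcpy(frame, b"A" * 24)     # overflows through the canary
try:
    check_canary(frame, 16)
except RuntimeError as e:
    print(e)                        # stack smashing detected
```

The key property is that the attacker cannot overwrite the return address without also clobbering the canary, and the canary's value is unpredictable.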
That completes the first part. The second part is about pushing patches painlessly to the clients. While it is still not possible to apply patches without restarting the browser (what, huh! Linux has a way to apply kernel patches without rebooting the kernel), Google has still come a long way toward making it simpler. The updates that are pushed are incredibly small – because of their smart diff tool, Courgette. The net effect is that updates can be pushed faster and more often, which means vulnerabilities are fixed sooner.
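The idea behind diff-based updates can be sketched with a naive byte-level delta. Note this is only the basic idea: the real Courgette goes much further, disassembling the binary and diffing at the instruction level so that shifted internal addresses don't blow up the patch size. The helper names here are made up for illustration.

```python
def make_patch(old: bytes, new: bytes) -> list:
    # Naive delta: record only the positions where the bytes differ.
    # Simplification: assumes both builds are the same length.
    assert len(old) == len(new)
    return [(i, new[i]) for i in range(len(new)) if old[i] != new[i]]

def apply_patch(old: bytes, patch: list) -> bytes:
    out = bytearray(old)
    for i, b in patch:
        out[i] = b
    return bytes(out)

old_build = bytes(100_000)           # pretend this is version N
patched = bytearray(old_build)
patched[500] = 0xFF                  # a one-byte fix in version N+1
new_build = bytes(patched)

patch = make_patch(old_build, new_build)
assert apply_patch(old_build, patch) == new_build
print(f"full download: {len(new_build)} bytes, patch: {len(patch)} change(s)")
```

A one-byte fix yields a one-entry patch instead of a 100 KB download; shipping only the delta is what makes frequent updates cheap.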
The last part of the job is to warn the user beforehand about visiting a potentially malicious site. This job is technically simple compared to the above two. Collaborate with a site (StopBadware.org) and keep an updated list of malicious sites. There is no need to send the user's URLs to the service: the browser can download the list (or a hashed form of the list) and check locally whether the user is entering a malicious website. This is the simplest of the three jobs. Prevention is better than handling, which is better than cure.
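The local check can be sketched with hash prefixes, which is roughly how Safe Browsing works: the browser holds only short hash prefixes of bad hosts, and a prefix hit triggers a confirmation round-trip to the server before any warning. Everything below (the hosts, the 4-byte prefix length, the function names) is a made-up illustration, not the actual protocol.

```python
import hashlib

def url_prefix(host: str, n: int = 4) -> bytes:
    # First n bytes of the SHA-256 of the (canonicalized) host name
    return hashlib.sha256(host.encode()).digest()[:n]

# Hypothetical downloaded blocklist, stored only as hash prefixes so the
# full list of bad URLs never needs to be shipped to every client.
BAD_HOSTS = {"evil.example", "malware.test"}
blocklist_prefixes = {url_prefix(h) for h in BAD_HOSTS}

def looks_malicious(host: str) -> bool:
    # A prefix hit is only probable; the real protocol then fetches the
    # full hashes from the server to confirm before warning the user.
    return url_prefix(host) in blocklist_prefixes

print(looks_malicious("evil.example"))   # True: prefix hit
print(looks_malicious("acmqueue.org"))   # no entry, so no hit expected
```

Because only hash prefixes leave the server, the browser never has to reveal the user's browsing history to check a URL.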
One thing worth noting is the extent of automated testing done by Chrome engineers to assure the quality of the product. In their own words:
The Google Chrome team has put significant effort into automating step 3 as much as possible. The team has inherited more than 10,000 tests from the WebKit project that ensure the Web platform features are working properly. These tests, along with thousands of other tests for browser-level features, are run after every change to the browser’s source code.
In addition to these regression tests, browser builds are tested on 1 million Web sites in a virtual-machine farm called ChromeBot. ChromeBot monitors the rendering of these sites for memory errors, crashes, and hangs. Running a browser build through ChromeBot often exposes subtle race conditions and other low-probability events before shipping the build to users.
All in all, a professional act!