I like breaking stuff! Whenever I try a new feature, it breaks. This used to be an annoying “skill” when I started programming, but over time I have learned to appreciate it.
Currently I am testing various memory-allocation strategies for Firefox. Over the last few weeks we have learned how important good memory allocation is. We saw impressive memory reductions for regular workloads, but I was still a little worried about huge workloads. How do we scale to 100+ tabs?
Testing scalability shouldn’t be too hard: take the browser and open many, many tabs. It gets boring to do manually, so I borrowed a script from Nick Nethercote. The new version includes about 150 web pages from a list of the most popular websites. The script opens a new page every 1.5 seconds until all 150 pages are open, waits 90 seconds for all pages to load, and then shows a text box indicating that the test has finished. I then close all windows except one and quit the browser. The results are measured with the time command on my 1.5-year-old dual-core MacBook Pro with 8GB of RAM. The script can be found here if you want to try it yourself.
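The actual script is Nick Nethercote's and is linked above; as a rough illustration of the procedure it performs, a minimal Python sketch might look like this (the `urls.txt` file name is hypothetical, and the real script drives the browser directly rather than through the `webbrowser` module):

```python
import time
import webbrowser

OPEN_DELAY = 1.5   # seconds between opening pages, as described in the post
SETTLE_TIME = 90   # seconds to wait after the last page is opened

def run_test(urls, opener=webbrowser.open_new_tab, sleep=time.sleep):
    """Open each URL in a new tab, pausing between opens, then let pages settle.

    Returns the number of pages opened. `opener` and `sleep` are injectable
    so the schedule can be dry-run without launching a browser.
    """
    for url in urls:
        opener(url)
        sleep(OPEN_DELAY)
    sleep(SETTLE_TIME)
    return len(urls)
```

With a `urls.txt` of one URL per line, something like `run_test([l.strip() for l in open("urls.txt") if l.strip()])` would reproduce the open-and-wait loop; wrapping the whole browser session in `time` then gives the wall-clock numbers.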
For a current nightly build of Firefox I get the following:
I also tried it with a Canary build of Chrome:
Huh, that’s a big difference! I noticed that Chrome has a hard time opening new sites after about 70 open pages. With 150 sites I can’t even scroll on a normal page. Firefox, on the other hand, is still pretty snappy, and scrolling feels like there are no other open tabs.
So what’s the reason? Firefox has a single-process but multiple-compartments model and uses 27 threads and 2.02GB of RAM for all 150 tabs. You can find a short or a long description of our compartment model.
I wrote a new script with this workaround and got the following results:
Now I see 43 Google Chrome Renderer processes, the main Google Chrome process, and a Helper process. The resident size in about:memory is a little over 5GB and the browser becomes unresponsive. I have to quit the browser without closing individual sites because the close-windows button in my script doesn’t work with the multi-process model. I also notice an uneven mapping between sites and processes: some processes host only 2-3 sites, while one process hosts about 50% of all sites. Maybe a bug? The main Google Chrome process has 368 threads with 150 open sites, rising to 420 during browser shutdown. A regular renderer process has 6 threads. Well, all that complexity and the system still doesn’t scale. It even gets worse: towards the end of the test the browser’s performance stagnates and opening a new site takes forever.
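If you want to reproduce these process and thread counts yourself, a small shell sketch like the following works on both macOS and Linux (the Chrome process names are what Activity Monitor shows for me and may differ between versions):

```shell
# Count processes whose full command line matches a pattern.
count_procs() {
    pgrep -f "$1" | wc -l | tr -d ' '
}

echo "renderer processes: $(count_procs 'Google Chrome Helper')"

# Thread count of a single process on macOS (BSD ps):
#   ps -M -p <pid> | tail -n +2 | wc -l
```

The `ps -M` line in the comment is how I would read a per-process thread count on a Mac; on Linux, `ps -o nlwp= -p <pid>` gives the same information.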
My ultimate test is running the V8 benchmark after all 150 pages are fully loaded.
Firefox Score: 3954
In comparison, our scores with a single tab:
Firefox Score: 5125
I also tried running the V8 benchmarks with Chrome, but the browser stopped rendering and the main Google Chrome process was pegged at 100% CPU.
My conclusion: If you have many open tabs, use Firefox!