Welcome to the thrilling world of V8, where speed is not just a feature but a way of life. As we bid farewell to 2023, it's time to celebrate the impressive accomplishments V8 has achieved this year.
Through innovative performance optimizations, V8 continues to push the boundaries of what's possible in the ever-evolving landscape of the Web. We introduced a new mid-tier compiler and implemented several improvements to the top-tier compiler infrastructure, the runtime and the garbage collector, which have resulted in significant speed gains across the board.
But our commitment to excellence doesn't stop there – we've also prioritized safety. We improved our sandboxing infrastructure and introduced Control-flow Integrity (CFI) to V8, providing a safer environment for users.
Below, we've outlined some key highlights from the year.
Maglev: new mid-tier optimizing compiler #
We've introduced a new optimizing compiler named Maglev, strategically positioned between our existing Sparkplug and TurboFan compilers. It serves as a fast optimizing compiler, generating good code at an impressive pace: Maglev compiles approximately 20 times slower than our baseline non-optimizing compiler Sparkplug, but 10 to 100 times faster than the top-tier TurboFan. We've observed significant performance improvements with Maglev, with JetStream improving by 8.2% and Speedometer by 6%. Maglev's faster compilation speed and reduced reliance on TurboFan also resulted in a 10% energy savings in V8's overall consumption during Speedometer runs. While not fully complete, Maglev's current state justified its launch in Chrome 117. More details in our blog post.
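Tier-up is driven by how hot a function gets: V8 starts a function in the Ignition interpreter and promotes it through Sparkplug, Maglev, and finally TurboFan as it keeps running. A minimal sketch of the kind of code that benefits (the loop counts here are illustrative, not V8's actual tiering thresholds):

```javascript
// A hot numeric function like this is a typical tier-up candidate:
// Ignition (interpreter) → Sparkplug (baseline) → Maglev → TurboFan.
function sumSquares(n) {
  let total = 0;
  for (let i = 0; i < n; i++) total += i * i;
  return total;
}

// Repeated calls make the function "hot"; V8's internal counters
// then schedule it for compilation by progressively higher tiers.
let result = 0;
for (let i = 0; i < 10_000; i++) result = sumSquares(100);
console.log(result); // 328350
```

Running a script like this in `d8` with `--trace-opt` prints a line each time a function is queued for optimization, which makes the tier-up visible.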
Turboshaft: new architecture for the top-tier optimizing compiler #
Maglev wasn't our only investment in improved compiler technology. We've also introduced Turboshaft, a new internal architecture for our top-tier optimizing compiler TurboFan, making it both easier to extend with new optimizations and faster at compiling. Since Chrome 120, the CPU-agnostic backend phases all use Turboshaft rather than TurboFan, and compile about twice as fast as before. This saves energy and paves the way for more exciting performance gains next year and beyond. Keep an eye out for updates!
Faster HTML parser #
We observed a significant portion of our benchmark time being consumed by HTML parsing. While not a direct enhancement to V8, we took the initiative and applied our expertise in performance optimization to add a faster HTML parser to Blink. These changes resulted in a notable 3.4% increase in Speedometer scores. The impact on Chrome was so positive that the WebKit project promptly integrated these changes into their repository. We take pride in contributing to the collective goal of achieving a faster Web!
Faster DOM allocations #
We have also been actively investing in the DOM side. Significant optimizations have been applied to the memory allocation strategies in Oilpan, the allocator for DOM objects. It has gained a page pool, which notably reduced the cost of round-trips to the kernel. Oilpan now supports both compressed and uncompressed pointers, and we avoid compressing high-traffic fields in Blink. Given how frequently decompression is performed, skipping it had a widespread impact on performance. In addition, knowing how fast the allocator is, we oilpanized frequently-allocated classes, which made allocation workloads 3x faster and showed significant improvement on DOM-heavy benchmarks such as Speedometer.
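To see why avoiding compression on hot fields pays off, here is a conceptual sketch of the scheme (not Oilpan's actual code): pointers into a 4 GB heap "cage" are stored as 32-bit offsets and rebuilt from the cage base on every load, and that rebuild is exactly the per-access cost the high-traffic Blink fields now skip. The cage base below is a hypothetical value.

```javascript
// Conceptual model of pointer compression in a 4 GB heap cage.
// The cage base must be 4 GB-aligned (its low 32 bits are zero),
// so decompression is a single OR with the base.
const CAGE_BASE = 0x7f0000000000n;

function compress(ptr) {
  // Keep only the 32-bit offset into the cage.
  return Number(ptr & 0xffffffffn);
}

function decompress(offset) {
  // Rebuild the full 64-bit pointer from base + offset.
  return CAGE_BASE | BigInt(offset >>> 0);
}

const ptr = CAGE_BASE + 0x1234n;
console.log(decompress(compress(ptr)) === ptr); // true
```

Storing a field uncompressed trades 4 extra bytes per pointer for skipping the `decompress` step on every access, which is the trade-off made for Blink's hottest fields.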
New JavaScript features #
We shipped a number of new JavaScript language features this year, including the RegExp `v` flag (a.k.a. Unicode set notation), `JSON.parse` with source, Array grouping, and `Array.fromAsync`. Unfortunately, we had to unship iterator helpers after discovering a web incompatibility, but we've worked with TC39 to fix the issue and will reship soon. Finally, we also made ES6+ JS code faster by eliding some redundant temporal dead zone checks for `let` and `const` bindings.
WebAssembly updates #
Many new features and performance improvements landed for Wasm this year. We enabled support for multi-memory, tail-calls (see our blog post for more details), and relaxed SIMD to unleash next-level performance. We finished implementing memory64 for your memory-hungry applications and are just waiting for the proposal to reach phase 4 so we can ship it! We made sure to incorporate the latest updates to the exception-handling proposal while still supporting the previous format. And we kept investing in JSPI for enabling another big class of applications on the web. Stay tuned for next year!
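Because these proposals ship at different times across engines, a common pattern is to feature-detect them with `WebAssembly.validate` before choosing a code path. A hedged sketch: the bytes below hand-encode a tiny module whose only function body is a single `return_call` (the tail-call instruction), so validation succeeds exactly on engines that understand tail calls.

```javascript
// Minimal module, roughly: (module (func (return_call 0))).
// We only validate it, never instantiate or run it.
const tailCallModule = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // magic + version
  0x01, 0x04, 0x01, 0x60, 0x00, 0x00,             // type section: () -> ()
  0x03, 0x02, 0x01, 0x00,                         // func section: func 0 has type 0
  0x0a, 0x06, 0x01, 0x04, 0x00, 0x12, 0x00, 0x0b, // code: return_call 0; end
]);

const supportsTailCalls = WebAssembly.validate(tailCallModule);
console.log(supportsTailCalls); // true on engines that ship tail calls
```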
Security #
On the security side, our three main topics for the year were sandboxing, fuzzing, and CFI. On the sandboxing side, we focused on building the missing infrastructure such as the code pointer table and the trusted pointer table. On the fuzzing side, we invested in everything from fuzzing infrastructure to special-purpose fuzzers and better language coverage. Some of our work was covered in this presentation. Finally, on the CFI side, we laid the foundation for our CFI architecture so that it can be realized on as many platforms as possible. Besides these, some smaller but noteworthy efforts include work on mitigating a popular exploit technique around `the_hole` and the launch of a new exploit bounty program in the form of the V8CTF.
Throughout the year, we dedicated efforts to numerous incremental performance enhancements. The combined impact of these small projects, along with the ones detailed in this blog post, is substantial! Below are benchmark scores illustrating V8's performance improvements achieved in 2023, with an overall growth of 14% for JetStream and an impressive 34% for Speedometer.
From all of us at V8, we wish you a joyous holiday season filled with fast, safe and fabulous experiences as you navigate the Web!