JavaScript engines: the turbocharger for the browser

Modern web browsers rely on modern JavaScript engines to ensure that code executes quickly – developers should therefore have a basic understanding of how they work. Here is an overview of the most common JavaScript engines.

The basic mission of every JavaScript engine is to turn JavaScript source code into fast, optimized code that the browser or web application can execute. Each browser uses its own engine, such as V8 in Google Chrome, Chakra in Microsoft Edge or SpiderMonkey in Mozilla Firefox.

The JavaScript engine pipeline

The same basic pipeline applies to all of them: it begins with the JavaScript source code the developer wrote. The engine parses it into an Abstract Syntax Tree (AST) – a tree representation of the source code – which is then compiled to bytecode. That bytecode is executed by the bytecode interpreter.
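To make the first step concrete, here is a minimal sketch: a line of source code, a hand-built tree in a simplified ESTree-like shape (real engines use their own internal node formats), and a tiny tree-walking evaluator standing in for the bytecode interpreter.

```javascript
// Source text, the input to the pipeline:
const source = "const answer = 40 + 2;";

// A hand-built, simplified AST for the snippet above.
// The node shapes are illustrative, not any engine's internal format.
const ast = {
  type: "VariableDeclaration",
  kind: "const",
  declarations: [{
    type: "VariableDeclarator",
    id: { type: "Identifier", name: "answer" },
    init: {
      type: "BinaryExpression",
      operator: "+",
      left: { type: "Literal", value: 40 },
      right: { type: "Literal", value: 2 },
    },
  }],
};

// A toy tree-walking evaluator, standing in for the interpreter stage:
function evaluate(node) {
  switch (node.type) {
    case "Literal":
      return node.value;
    case "BinaryExpression":
      if (node.operator === "+") {
        return evaluate(node.left) + evaluate(node.right);
      }
      throw new Error("unsupported operator: " + node.operator);
    default:
      throw new Error("unsupported node type: " + node.type);
  }
}

console.log(evaluate(ast.declarations[0].init)); // 42
```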

To achieve better execution speed, this bytecode can be sent to an optimizing compiler together with collected profiling data. The optimizing compiler makes certain assumptions based on this profiling data and then generates highly optimized machine code. If any of these assumptions turn out to be incorrect at any point, the code is de-optimized and execution returns to the interpreter, which gathers new profiling data so the code can be optimized again later.
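A short sketch of why those assumptions matter: an engine that has only ever seen a function called with numbers can specialize its machine code for numeric addition; the first call with a different type invalidates that assumption and typically triggers a de-optimization. (The exact behavior is engine-internal; in V8 it can be observed with the `--trace-deopt` flag of `node`.)

```javascript
function add(a, b) {
  return a + b;
}

// Warm-up: many numeric calls let the profiler conclude
// "a and b are always numbers", so specialized code can be emitted.
let sum = 0;
for (let i = 0; i < 100000; i++) {
  sum += add(i, 1);
}

// This call breaks the assumption: `+` now means string concatenation,
// so a speculating engine must throw the specialized code away and
// fall back to generic code.
const label = add("total: ", sum);
console.log(label);
```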

Now let’s look at the parts of this pipeline that are actually responsible for executing JavaScript code – that is: where exactly is code interpreted and optimized? We’ll focus on some of the differences between the most popular JavaScript engines. The pipeline typically consists of an interpreter and at least one optimizing compiler. The interpreter generates bytecode very quickly; the optimizing compiler takes a little longer but generates highly optimized machine code.

This generic approach closely matches V8, the JavaScript engine used in Chrome and Node.js. V8’s interpreter is called Ignition, its optimizing compiler TurboFan.

Another popular JavaScript engine is SpiderMonkey, originally developed by Brendan Eich in the 1990s, making it the very first JavaScript engine. SpiderMonkey is used in Mozilla Firefox. Its pipeline differs from V8’s: there is not one optimizing compiler but two. The interpreter hands off to the Baseline compiler, which generates simple machine code. Combined with profiling data collected while the code runs, the IonMonkey compiler can then produce highly optimized code. If any of its assumptions turn out to be incorrect, execution falls back to the baseline code.

Chakra, the JavaScript engine in Microsoft Edge and Node-ChakraCore, has a very similar architecture: an interpreter plus two compilers, SimpleJIT and FullJIT (JIT stands for just-in-time compiler).

JavaScriptCore (JSC for short), Apple’s JavaScript engine used in Safari and React Native, takes this to the extreme with three different optimizing compilers: LLInt, the low-level interpreter, feeds the Baseline compiler, which in turn feeds the DFG compiler. Code that runs very often can finally be optimized by the FTL compiler.

Compilers help JavaScript engines achieve better execution speed: they generate highly optimized machine code. Different engines rely on one, two or even more compilers. (Graphic: Mathias Bynens, Benedikt Meurer)

Why do some engines have more compilers than others? An interpreter can generate bytecode quickly, but bytecode is generally not very efficient to execute. An optimizing compiler, on the other hand, takes longer but ultimately generates much more efficient machine code. There is therefore a trade-off between generating code quickly (interpreter) and executing the generated code quickly (optimizing compiler). Some engines add multiple optimizing compilers with different timing and efficiency characteristics, which allows finer control over this trade-off at the cost of additional complexity and compile time.
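The trade-off can be pictured with a toy model of tiering – this is not how any real engine is implemented, just an illustration of the decision logic: run cheap-to-produce code immediately, and invest in optimization only once a function has proven itself hot. The threshold value is a made-up number.

```javascript
const TIER_UP_THRESHOLD = 1000; // hypothetical invocation count

// Wraps two implementations of the same function: a cheap "baseline"
// version used immediately, and an "optimized" version swapped in
// once the function has been called often enough.
function makeTieredFunction(baseline, optimized) {
  let calls = 0;
  let current = baseline;
  return function (...args) {
    calls += 1;
    if (calls === TIER_UP_THRESHOLD) {
      current = optimized; // pay the compile cost once, run fast afterwards
    }
    return current(...args);
  };
}

// Both variants compute the same result; in a real engine only their
// cost profiles (time to produce vs. time to run) would differ.
const square = makeTieredFunction(
  (x) => x * x, // stand-in for interpreted/baseline code
  (x) => x * x  // stand-in for optimized machine code
);
```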
