Overview
The Dart VM supports multiple compilation strategies, each optimized for different use cases:
- JIT (Just-in-Time) - Compiles code during execution with adaptive optimization
- AOT (Ahead-of-Time) - Pre-compiles code to machine code before execution
- Snapshots - Serialized heap state for fast startup
The main difference between compilation modes is when and how the VM converts Dart source to executable code. The runtime environment remains the same.
Just-in-Time (JIT) Compilation
JIT compilation happens dynamically as the program runs, enabling adaptive optimization based on actual execution patterns.
Unoptimizing Compiler
When a function is first called, it’s compiled by the unoptimizing compiler for fast code generation:
- CFG Generation: Walk the Kernel AST to build a Control Flow Graph with Intermediate Language (IL) instructions
- Code Generation: Directly lower IL to machine code using one-to-many instruction mapping
The unoptimizing compiler prioritizes compilation speed over code quality: it produces executable code quickly but applies essentially no optimizations.
Lazy Compilation
Functions start with a placeholder entry point that jumps to LazyCompileStub; on the first call, the stub compiles the function and patches the entry point so subsequent calls run the compiled code directly.
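The lazy-compilation handshake can be sketched in a few lines of Python. This is an illustrative model, not VM code: function_table, compile_function, and lazy_compile_stub are invented names standing in for the VM's code pointers, unoptimizing compiler, and LazyCompileStub.

```python
# Illustrative model: each function slot starts out pointing at a
# "lazy compile stub"; the first call compiles the function and
# patches the slot so later calls skip the stub entirely.

function_table = {}   # name -> callable (stand-in for a code pointer)

def compile_function(name, source):
    """Stand-in for the unoptimizing compiler: source text -> callable."""
    return eval(f"lambda x: {source}")

def lazy_compile_stub(name, source):
    def stub(x):
        compiled = compile_function(name, source)
        function_table[name] = compiled   # patch: bypass the stub next time
        return compiled(x)                # transfer control into fresh code
    return stub

function_table["double"] = lazy_compile_stub("double", "x * 2")
print(function_table["double"](21))  # first call compiles, prints 42
print(function_table["double"](5))   # second call runs compiled code directly
```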
Inline Caching
Since unoptimized code doesn’t resolve calls statically, the VM uses inline caching for dynamic dispatch. Each call site’s cache records:
- Receiver class IDs
- Target methods
- Invocation frequency counters (for optimization decisions)
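The per-call-site cache described above can be modeled as a small table keyed by receiver class. This is a conceptual sketch, not a VM API; the InlineCache class and the Cat/Dog example are invented for illustration.

```python
# Conceptual sketch: an inline cache records, per call site, which
# receiver classes were seen, the method each resolved to, and a hit
# counter the optimizer can later consult as type feedback.

class InlineCache:
    def __init__(self, selector):
        self.selector = selector
        self.entries = {}   # receiver class -> (target method, hit count)

    def dispatch(self, receiver, *args):
        cls = type(receiver)
        if cls in self.entries:              # cache hit: skip method lookup
            target, count = self.entries[cls]
            self.entries[cls] = (target, count + 1)
        else:                                # cache miss: full lookup, record it
            target = getattr(cls, self.selector)
            self.entries[cls] = (target, 1)
        return target(receiver, *args)

class Cat:
    def speak(self): return "meow"

class Dog:
    def speak(self): return "woof"

site = InlineCache("speak")
pets = [Cat(), Dog(), Cat()]
print([site.dispatch(p) for p in pets])   # ['meow', 'woof', 'meow']
print(site.entries[Cat][1])               # Cat was seen twice at this site
```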
Adaptive Optimizing Compilation
As code runs, the VM collects execution profiles:
- Type feedback from inline caches
- Execution counters for functions and basic blocks
When a function becomes hot, the optimizing compiler recompiles it:
- Build unoptimized IL from Kernel AST
- Convert to SSA (Static Single Assignment) form
- Apply optimizations:
- Speculative specialization based on type feedback
- Inlining
- Range analysis
- Type propagation
- Representation selection
- Global value numbering
- Allocation sinking
- Lower to machine code with linear scan register allocation
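The tiering decision that drives this pipeline can be sketched as a simple invocation counter. This is a toy model: the Function wrapper, the threshold value, and the two lambdas are invented for illustration (the real VM's threshold is configurable, e.g. via --optimization-counter-threshold).

```python
# Sketch of the JIT's tiering decision: every call bumps a counter;
# once a function crosses a "hot" threshold, its unoptimized code is
# replaced by an optimized version with identical semantics.

HOT_THRESHOLD = 3   # illustrative value only

class Function:
    def __init__(self, unoptimized, optimized):
        self.code = unoptimized
        self.optimized_code = optimized
        self.usage = 0
        self.tier = "unoptimized"

    def __call__(self, *args):
        self.usage += 1
        if self.tier == "unoptimized" and self.usage >= HOT_THRESHOLD:
            self.code = self.optimized_code   # install optimized code
            self.tier = "optimized"
        return self.code(*args)

# Same semantics, different code quality: a naive and a strength-reduced form.
f = Function(lambda x: sum([x] * 4), lambda x: x * 4)
for i in range(5):
    f(i)
print(f.tier)   # 'optimized' once the call count crosses the threshold
```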
Deoptimization
Optimized code makes speculative assumptions that may be violated. When an assumption fails, deoptimization:
- Transfers execution to the matching point in unoptimized code
- Unoptimized code handles all cases correctly
- Function is eventually reoptimized with updated type feedback
Two mechanisms trigger deoptimization:
- Inline checks (eager deoptimization) - CheckSmi, CheckClass instructions before operations
- Global guards (lazy deoptimization) - Runtime discards optimized code when global assumptions change
Deoptimization points use deopt IDs to match optimized positions to unoptimized code locations, ensuring correct resumption after side effects.
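The eager-check flavor can be modeled as a guarded fast path that bails out to the generic version. A conceptual sketch, not VM code: the Deoptimize exception and the add_* functions are invented stand-ins for a CheckSmi-style guard and the unoptimized fallback.

```python
# Conceptual model of eager deoptimization: optimized code guards its
# speculative assumption (both operands are ints, akin to CheckSmi)
# and bails out to the generic unoptimized path when the guard fails.

class Deoptimize(Exception):
    pass

def add_unoptimized(a, b):
    return a + b            # generic: handles ints, strings, lists, ...

def add_optimized(a, b):
    if not (isinstance(a, int) and isinstance(b, int)):
        raise Deoptimize()  # guard failed: speculative assumption violated
    return a + b            # int-specialized fast path

def add(a, b):
    try:
        return add_optimized(a, b)
    except Deoptimize:
        return add_unoptimized(a, b)   # resume in unoptimized code

print(add(2, 3))        # fast path: 5
print(add("a", "b"))    # deoptimizes, still correct: 'ab'
```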
On-Stack Replacement (OSR)
For long-running loops, the VM can switch from unoptimized to optimized code while the function is executing:
- Stack frame for unoptimized version is transparently replaced
- Execution continues in optimized code
- Critical for hot loops in long-running applications
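The effect of OSR can be illustrated with a loop that swaps its body function mid-execution while carrying the loop state across. A toy model with invented names (slow_body, fast_body, OSR_THRESHOLD); the real mechanism rewrites the stack frame, which Python cannot express directly.

```python
# Toy model of on-stack replacement: a hot loop starts in "unoptimized"
# per-iteration code and, once a trip-count threshold is crossed, swaps
# in optimized code without restarting the loop (i and total carry over).

OSR_THRESHOLD = 100   # illustrative trip count

def slow_body(total, i):
    return total + i          # stand-in for unoptimized code

def fast_body(total, i):
    return total + i          # stand-in for optimized code (same semantics)

def sum_below(n):
    body = slow_body
    total = 0
    switched_at = None
    for i in range(n):
        if body is slow_body and i == OSR_THRESHOLD:
            body = fast_body        # OSR: replace code mid-execution
            switched_at = i
        total = body(total, i)
    return total, switched_at

result, switched = sum_below(1000)
print(result)   # 499500 either way: the switch must not change the answer
```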
Ahead-of-Time (AOT) Compilation
AOT compilation produces machine code before execution, enabling:
- Platforms without JIT (iOS, embedded systems)
- Fast startup with consistent performance
- Reduced runtime footprint (no compiler in deployed app)
JIT achieves higher peak performance after a warmup period, while AOT delivers its (somewhat lower) peak performance immediately, with no warmup.
AOT Requirements
Since AOT can’t compile at runtime:
- Complete code coverage - Every reachable function must be pre-compiled
- No speculation - Can’t rely on assumptions that might be violated
Type Flow Analysis (TFA)
AOT uses global static analysis to:
- Determine which code is reachable from entry points
- Track which classes are instantiated
- Analyze how types flow through the program
- Devirtualize calls based on proven type information
TFA is conservative - it errs on the side of correctness, unlike JIT which can speculate and deoptimize.
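The devirtualization step can be sketched as a tiny reachability check: if whole-program analysis proves only one implementing class is ever instantiated, the virtual call can be bound directly. The class names and data structures here are invented for illustration; real TFA works over the full program's type lattice.

```python
# Minimal sketch of TFA-style devirtualization: combine "which classes
# are instantiated" with "which classes implement a selector" to decide
# whether a virtual call site has exactly one live target.

instantiated = {"Circle"}                      # classes proven to be constructed
implementers = {"area": ["Circle", "Square"]}  # classes overriding each selector

def devirtualize(selector):
    live = [c for c in implementers[selector] if c in instantiated]
    if len(live) == 1:
        return live[0]     # single live target: direct (inlinable) call
    return None            # multiple candidates: keep virtual dispatch

print(devirtualize("area"))   # 'Circle' - Square is never instantiated
```

If the analysis later sees Square instantiated too, the call site must stay virtual, which is why TFA has to be a conservative whole-program pass.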
Precompiled Runtime
AOT snapshots run on a stripped-down VM:
- No JIT compiler components
- No dynamic code loading
- Smaller binary size
- Reduced memory footprint
AppJIT Snapshots
AppJIT combines JIT compilation with snapshot serialization:
- Training run - Execute app with mock data, collect JIT-compiled code
- Snapshot - Serialize compiled code and VM state
- Deployment - Distribute snapshot instead of source
- Runtime - Fast startup from pre-compiled code, can still JIT if needed
This mode benefits long-running SDK tools such as dartanalyzer and dart2js that otherwise spend significant time in JIT warmup.
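The training/snapshot/deployment flow can be illustrated with a compile cache that is warmed, serialized, and reloaded. This is a loose analogy only, not the real snapshot format; compile_fn and the pickle-based "snapshot" are invented for the sketch.

```python
# Loose analogy for AppJIT: a "training run" populates a compile cache,
# the cache is serialized to disk, and a later run starts with the
# cache pre-warmed instead of compiling everything from scratch.

import os
import pickle
import tempfile

compile_cache = {}

def compile_fn(source):                 # stand-in for JIT compilation
    if source not in compile_cache:
        compile_cache[source] = f"machine-code-for({source})"
    return compile_cache[source]

# Training run: execute a representative workload, filling the cache.
for src in ["main", "parse", "emit"]:
    compile_fn(src)

# "Snapshot": serialize the warmed cache.
path = os.path.join(tempfile.mkdtemp(), "app.snapshot")
with open(path, "wb") as f:
    pickle.dump(compile_cache, f)

# Deployment run: start from the snapshot, so no warmup is needed,
# while compile_fn can still handle anything the training run missed.
with open(path, "rb") as f:
    compile_cache = pickle.load(f)
print(len(compile_cache))   # 3 precompiled entries available at startup
```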
Compilation Mode Comparison
| Mode | Compile Time | Startup | Peak Performance | Warmup | Use Case |
|---|---|---|---|---|---|
| JIT | During execution | Slower | Best | Yes | Development, hot-reload |
| AOT | Before execution | Fastest | Good | None | Production, mobile apps |
| AppJIT | Training run | Very fast | Good | Minimal | CLI tools, servers |
Debugging Compilation
JIT Flags
| Flag | Description |
|---|---|
| --print-flow-graph[-optimized] | Print IL for all/optimized compilations |
| --disassemble[-optimized] | Disassemble all/optimized functions |
| --print-flow-graph-filter=xyz | Filter output to specific functions |
| --compiler-passes=... | Control compiler passes |
| --no-background-compilation | Compile on main thread |
| --trace-deoptimization | Show deoptimization events |
AOT Flags
The JIT flags above can also be passed to the AOT compiler (gen_snapshot) to inspect AOT compilation.
Summary
Dart’s flexible compilation system provides:
- JIT for fast development with adaptive optimization
- AOT for production deployment with consistent, predictable performance
- AppJIT for tools needing fast startup with JIT capabilities
- Sophisticated optimization based on type feedback and static analysis
- Deoptimization safety allowing speculative optimizations