
Overview

Node.js provides comprehensive performance measurement and profiling tools through the perf_hooks module, V8 Inspector, and command-line options.

Performance Hooks API

The node:perf_hooks module implements W3C Web Performance APIs plus Node.js-specific measurements.

Basic Performance Measurement

import { performance, PerformanceObserver } from 'node:perf_hooks';

const obs = new PerformanceObserver((items) => {
  console.log(items.getEntries()[0].duration);
  performance.clearMarks();
});
obs.observe({ type: 'measure' });

performance.mark('Start');
doSomeLongRunningProcess();
performance.mark('End');
performance.measure('Process Duration', 'Start', 'End');

Creating Performance Marks

// Create marks at specific points
performance.mark('request-start');

// Process request
await handleRequest();

performance.mark('request-end');

// Measure duration between marks
performance.measure(
  'request-duration',
  'request-start',
  'request-end'
);

// Get the measurement
const measurements = performance.getEntriesByName('request-duration');
console.log(`Request took ${measurements[0].duration}ms`);

Performance Observer

Observe performance entries as they’re recorded:
import { PerformanceObserver, performance } from 'node:perf_hooks';

const obs = new PerformanceObserver((list) => {
  const entries = list.getEntries();
  entries.forEach((entry) => {
    console.log(`${entry.name}: ${entry.duration}ms`);
  });
});

// Observe different entry types
obs.observe({ 
  entryTypes: ['measure', 'function'],
  buffered: true  // Include past entries
});

performance.mark('start');
await doWork();
performance.measure('work', 'start');

CPU Profiling

Using the --prof Flag

Generate V8 CPU profiles:
1. Run with profiling enabled:

node --prof app.js

This creates an isolate-0x*.log file in the current directory.

2. Process the log file:

node --prof-process isolate-0x*.log > profile.txt

3. Analyze the output. Open profile.txt to see CPU time spent in each function:
[Summary]:
   ticks  total  nonlib   name
   2000   40.0%   40.0%  JavaScript
   1500   30.0%   30.0%  C++
   1500   30.0%   30.0%  GC

Programmatic CPU Profiling

import { Session } from 'node:inspector/promises';
import fs from 'node:fs';

const session = new Session();
session.connect();

// Start profiling
await session.post('Profiler.enable');
await session.post('Profiler.start');

// Run code to profile
await performExpensiveOperation();

// Stop and save profile
const { profile } = await session.post('Profiler.stop');
fs.writeFileSync('./profile.cpuprofile', JSON.stringify(profile));

session.disconnect();
Open .cpuprofile files in Chrome DevTools (Performance tab → Load Profile) for interactive analysis.

Memory Profiling

Heap Snapshots

Capture memory state at a specific point:
import { Session } from 'node:inspector/promises';
import fs from 'node:fs';

const session = new Session();
const fd = fs.openSync('heap.heapsnapshot', 'w');

session.connect();

session.on('HeapProfiler.addHeapSnapshotChunk', (m) => {
  fs.writeSync(fd, m.params.chunk);
});

await session.post('HeapProfiler.takeHeapSnapshot', null);
session.disconnect();
fs.closeSync(fd);

Using v8.writeHeapSnapshot()

Simpler heap snapshot API:
import v8 from 'node:v8';

// Take snapshot at any point
const filename = v8.writeHeapSnapshot();
console.log(`Heap snapshot written to ${filename}`);

// With custom filename
v8.writeHeapSnapshot('./snapshots/heap-snapshot.heapsnapshot');

Analyzing Memory Leaks

1. Take an initial snapshot:

v8.writeHeapSnapshot('./snapshot-1.heapsnapshot');

2. Perform the suspect operations:

// Run the code suspected of leaking
for (let i = 0; i < 1000; i++) {
  processData();
}

3. Take a second snapshot:

v8.writeHeapSnapshot('./snapshot-2.heapsnapshot');

4. Compare in DevTools:

  1. Open Chrome DevTools
  2. Go to Memory tab
  3. Load both snapshots
  4. Select “Comparison” view
  5. Look for objects that increased

Event Loop Monitoring

Event Loop Utilization

Measure how busy the event loop is:
import { performance } from 'node:perf_hooks';

const elu1 = performance.eventLoopUtilization();

// Do some work
await doWork();

const elu2 = performance.eventLoopUtilization();
const utilization = performance.eventLoopUtilization(elu2, elu1);

console.log(utilization);
// {
//   idle: 1000,
//   active: 500,
//   utilization: 0.3333
// }

Event Loop Delay

Monitor event loop lag:
import { monitorEventLoopDelay } from 'node:perf_hooks';

// resolution is the sampling interval in milliseconds
const h = monitorEventLoopDelay({ resolution: 20 });
h.enable();

setInterval(() => {
  // histogram values are reported in nanoseconds
  console.log({
    min: h.min,
    max: h.max,
    mean: h.mean,
    stddev: h.stddev,
    percentiles: {
      p50: h.percentile(50),
      p99: h.percentile(99),
    }
  });
}, 1000);

Resource Timing

Track network resource performance (entries are recorded, for example, by the built-in fetch in recent Node versions):
import { PerformanceObserver } from 'node:perf_hooks';

const obs = new PerformanceObserver((list) => {
  const entries = list.getEntries();
  entries.forEach((entry) => {
    console.log(`${entry.name}: ${entry.duration}ms`);
  });
});

obs.observe({ type: 'resource' });

Performance Best Practices

Avoid blocking the event loop:
// Bad - blocks event loop
function processLargeFile() {
  const data = fs.readFileSync('large.txt');
  return parse(data);
}

// Good - non-blocking
async function processLargeFile() {
  const data = await fs.promises.readFile('large.txt');
  return parse(data);
}

Stream large payloads instead of buffering them in memory:

import { pipeline } from 'node:stream/promises';
import JSONStream from 'JSONStream';

await pipeline(
  fs.createReadStream('large.json'),
  JSONStream.parse('items.*'),
  processItems
);

Offload CPU-intensive work to worker threads:

import { Worker } from 'node:worker_threads';

function runWorker(data) {
  return new Promise((resolve, reject) => {
    const worker = new Worker('./worker.js', {
      workerData: data
    });
    worker.on('message', resolve);
    worker.on('error', reject);
  });
}

const result = await runWorker({ task: 'heavy-computation' });

Cache the results of expensive operations:

const cache = new Map();

async function fetchData(key) {
  if (cache.has(key)) {
    return cache.get(key);
  }

  const data = await expensiveOperation(key);
  cache.set(key, data);
  return data;
}

Use stream pipelines to transform large files:

import { pipeline } from 'node:stream/promises';
import { createReadStream, createWriteStream } from 'node:fs';
import { createGzip } from 'node:zlib';

await pipeline(
  createReadStream('input.txt'),
  createGzip(),
  createWriteStream('input.txt.gz')
);

Benchmarking

Simple Benchmarking

import { performance } from 'node:perf_hooks';

function benchmark(fn, iterations = 1000) {
  const start = performance.now();
  
  for (let i = 0; i < iterations; i++) {
    fn();
  }
  
  const end = performance.now();
  const totalTime = end - start;
  const avgTime = totalTime / iterations;
  
  console.log(`Total: ${totalTime.toFixed(2)}ms`);
  console.log(`Average: ${avgTime.toFixed(4)}ms`);
  console.log(`Ops/sec: ${(1000 / avgTime).toFixed(0)}`);
}

benchmark(() => JSON.parse('{"test":"value"}'));

Comparing Implementations

function compare(implementations) {
  const results = [];
  
  for (const [name, fn] of Object.entries(implementations)) {
    const start = performance.now();
    fn();
    const duration = performance.now() - start;
    
    results.push({ name, duration });
  }
  
  results.sort((a, b) => a.duration - b.duration);
  
  console.log('\nResults (fastest first):');
  results.forEach((r, i) => {
    const relative = i === 0 ? 'baseline' : 
                    `${(r.duration / results[0].duration).toFixed(2)}x slower`;
    console.log(`${r.name}: ${r.duration.toFixed(2)}ms (${relative})`);
  });
}

compare({
  'Array.push': () => {
    const arr = [];
    for (let i = 0; i < 10000; i++) arr.push(i);
  },
  'Index assignment': () => {
    const arr = [];
    for (let i = 0; i < 10000; i++) arr[i] = i;
  },
  'Array.from': () => {
    Array.from({ length: 10000 }, (_, i) => i);
  }
});

Flame Graphs

Generating Flame Graphs

1. Install the FlameGraph tools:

git clone https://github.com/brendangregg/FlameGraph

2. Profile with perf (Linux):

node --perf-basic-prof app.js &
PID=$!
sudo perf record -F 99 -p $PID -g -- sleep 30
sudo perf script > out.perf

3. Generate the flame graph:

./FlameGraph/stackcollapse-perf.pl out.perf > out.folded
./FlameGraph/flamegraph.pl out.folded > flamegraph.svg

Performance Monitoring in Production

Built-in Diagnostics Report

// Write a diagnostic report on demand
process.report.writeReport();

// Trigger on uncaught exception
process.report.reportOnUncaughtException = true;

// Trigger on signal
process.report.reportOnSignal = true;
process.report.signal = 'SIGUSR2';

Custom Metrics

import { performance } from 'node:perf_hooks';

class MetricsCollector {
  constructor() {
    this.metrics = new Map();
  }

  record(name, value) {
    if (!this.metrics.has(name)) {
      this.metrics.set(name, []);
    }
    this.metrics.get(name).push({
      value,
      timestamp: Date.now()
    });
  }

  getStats(name) {
    const values = this.metrics.get(name) || [];
    if (values.length === 0) return null;

    const nums = values.map(v => v.value);
    return {
      count: nums.length,
      min: Math.min(...nums),
      max: Math.max(...nums),
      avg: nums.reduce((a, b) => a + b) / nums.length
    };
  }
}

const metrics = new MetricsCollector();

// Record metrics
const start = performance.now();
await handleRequest();
metrics.record('request-duration', performance.now() - start);

// Get statistics
console.log(metrics.getStats('request-duration'));

Optimization Checklist

1. Measure First: always profile before optimizing. Use --prof or DevTools to identify bottlenecks.

2. Check Event Loop: monitor event loop utilization and delay. High values indicate blocking operations.

3. Analyze Memory: take heap snapshots to identify memory leaks and excessive allocations.

4. Optimize Hot Paths: focus on the code that runs most frequently (revealed by profiling).

5. Validate Changes: re-measure after each optimization to confirm improvements.

Next Steps

- Debugging: debug performance issues
- Testing: performance testing strategies