
What is AIM?

Aggregated Internet Measurement (AIM) is a Cloudflare framework that translates raw network metrics — download speed, upload speed, latency, latency under load, and packet loss — into a quality score for a specific use case such as video streaming, online gaming, or real-time communications. Each use case weights the underlying metrics differently. A score is computed by summing weighted metric contributions and mapping the result to a five-level quality classification.
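As a rough illustration of "summing weighted metric contributions", a per-use-case score can be sketched as below. The weights, normalizers, and 0–100 scale here are hypothetical placeholders, not AIM's actual values; per the behavior described later on this page, an absent metric contributes 0.

```typescript
// Hypothetical metric set and weights -- AIM's real per-use-case
// weights and normalization curves are internal to Cloudflare.
type Metrics = {
  download?: number;              // Mbps
  upload?: number;                // Mbps
  latency?: number;               // ms, unloaded round-trip time
  loadedLatencyIncrease?: number; // ms added under load
  packetLoss?: number;            // fraction, 0..1
};

type Weights = { [K in keyof Metrics]?: number };

// Map each raw metric onto an illustrative 0..100 sub-score.
function normalize(key: keyof Metrics, value: number): number {
  switch (key) {
    case 'download':
    case 'upload':
      // More throughput is better; saturate at 100 Mbps (placeholder).
      return Math.min(value, 100);
    case 'latency':
    case 'loadedLatencyIncrease':
      // Lower is better; 0 ms -> 100, 500 ms or more -> 0 (placeholder).
      return Math.max(0, 100 - value / 5);
    case 'packetLoss':
      // 0% loss -> 100, 10% or more -> 0 (placeholder).
      return Math.max(0, 100 - value * 1000);
  }
}

// Weighted sum of sub-scores; a missing metric contributes 0.
function scoreUseCase(metrics: Metrics, weights: Weights): number {
  let total = 0;
  let weightSum = 0;
  for (const key of Object.keys(weights) as (keyof Metrics)[]) {
    const weight = weights[key] ?? 0;
    const value = metrics[key];
    weightSum += weight;
    total += value === undefined ? 0 : normalize(key, value) * weight;
  }
  return weightSum > 0 ? Math.round(total / weightSum) : 0;
}
```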

getScores()

getScores(): {
  [useCase: string]: {
    points: number;
    classificationIdx: 0 | 1 | 2 | 3 | 4;
    classificationName: 'bad' | 'poor' | 'average' | 'good' | 'great';
  }
}
Returns an object keyed by use case name. Each value contains a numeric score and a quality classification derived from that score.
getScores() only produces meaningful results after isFinished is true. The engine must complete all configured measurements before the scores reflect the full connection quality. Call getScores() inside onFinish to guarantee complete data.
[useCase] (object)
One entry per recognized use case (for example, streaming, gaming, rtc).

Classifications

Index  Name     Meaning
0      bad      The connection is unsuitable for this use case.
1      poor     The connection will likely struggle with this use case.
2      average  Acceptable performance with possible degradation.
3      good     Reliable performance for this use case.
4      great    Excellent performance with headroom to spare.
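The points-to-classification step can be sketched as a threshold lookup. The cut-off values below are illustrative only (chosen to be consistent with the sample output later on this page), not AIM's actual thresholds:

```typescript
const CLASSIFICATIONS = ['bad', 'poor', 'average', 'good', 'great'] as const;

// Illustrative lower bounds for each classification level.
const THRESHOLDS = [0, 25, 50, 65, 90];

function classify(points: number): {
  classificationIdx: 0 | 1 | 2 | 3 | 4;
  classificationName: (typeof CLASSIFICATIONS)[number];
} {
  // Pick the highest level whose lower bound the score meets.
  let idx = 0;
  for (let i = 0; i < THRESHOLDS.length; i++) {
    if (points >= THRESHOLDS[i]) idx = i;
  }
  return {
    classificationIdx: idx as 0 | 1 | 2 | 3 | 4,
    classificationName: CLASSIFICATIONS[idx],
  };
}
```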

Contributing metrics

AIM scores are derived from the following measurements, weighted per use case:
  • Download bandwidth — raw throughput available to receive data
  • Upload bandwidth — raw throughput available to send data
  • Unloaded latency — round-trip time at idle
  • Loaded latency increase — how much latency degrades under download or upload load (bufferbloat indicator); calculated as max(downLoadedLatency, upLoadedLatency) − latency
  • Packet loss — ratio of UDP packets that did not complete the round trip
Not all metrics are required for every use case. When a metric is absent (for example, because packetLoss was not included in the measurement sequence), its contribution defaults to 0 and the score is calculated from the remaining metrics.
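The loaded latency increase described above follows directly from the formula in the bullet list. A minimal sketch (the function name and optional-parameter handling are our own, not part of the library):

```typescript
// Bufferbloat indicator: how much round-trip time grows under load.
// All inputs are in milliseconds. Returns undefined when neither
// loaded-latency measurement ran, mirroring a metric being absent.
function loadedLatencyIncrease(
  latency: number,
  downLoadedLatency?: number,
  upLoadedLatency?: number
): number | undefined {
  const loaded = Math.max(
    downLoadedLatency ?? -Infinity,
    upLoadedLatency ?? -Infinity
  );
  if (!Number.isFinite(loaded)) return undefined;
  // max(downLoadedLatency, upLoadedLatency) - latency, floored at 0.
  return Math.max(0, loaded - latency);
}
```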

Requirements

  • All measurements in the configured sequence must complete before calling getScores() for final results — check isFinished or use the onFinish callback.
  • Loaded latency scores are only meaningful when measureDownloadLoadedLatency and measureUploadLoadedLatency are true (both default to true).
  • Packet loss scoring requires a configured and reachable TURN server. See Packet loss for setup instructions.
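Putting the requirements together, a configuration sketch might look like the following. Only measureDownloadLoadedLatency and measureUploadLoadedLatency are option names confirmed on this page; the TURN server configuration keys are documented on the Packet loss page and are not reproduced here.

```typescript
import SpeedTest from '@cloudflare/speedtest';

const engine = new SpeedTest({
  // Loaded-latency measurements are on by default; shown here for clarity.
  measureDownloadLoadedLatency: true,
  measureUploadLoadedLatency: true,
  // Packet loss scoring additionally needs a reachable TURN server --
  // see the Packet loss page for the exact configuration keys.
});

engine.onFinish = results => {
  // All configured measurements have completed, so scores are final.
  console.log(results.getScores());
};
```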

Usage example

import SpeedTest from '@cloudflare/speedtest';

const engine = new SpeedTest();

engine.onFinish = results => {
  const scores = results.getScores();

  console.log(scores);
  // {
  //   streaming: { points: 87, classificationIdx: 3, classificationName: 'good' },
  //   gaming:    { points: 72, classificationIdx: 3, classificationName: 'good' },
  //   rtc:       { points: 61, classificationIdx: 2, classificationName: 'average' },
  // }

  for (const [useCase, score] of Object.entries(scores)) {
    console.log(`${useCase}: ${score.classificationName} (${score.points} pts)`);
  }
};

Checking a specific use case

engine.onFinish = results => {
  const scores = results.getScores();
  const streaming = scores.streaming;

  if (!streaming) {
    console.log('Streaming score not available');
    return;
  }

  if (streaming.classificationIdx >= 3) {
    console.log('Connection is good enough for HD streaming.');
  } else {
    console.log(`Streaming quality is ${streaming.classificationName} — consider reducing resolution.`);
  }
};

Guarding against incomplete results

const engine = new SpeedTest({ autoStart: false });

engine.onResultsChange = () => {
  // getScores() mid-run returns partial scores — use with caution
  if (engine.results.isFinished) {
    const scores = engine.results.getScores();
    renderScores(scores);
  }
};

engine.play();
For UI display, prefer classificationName as the primary label and use classificationIdx for programmatic comparisons (for example, score.classificationIdx >= 3 to check for good or better quality).
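For example, a small pair of helpers built on that convention (the names and the Score type alias here are our own, not part of the library):

```typescript
// Mirrors the shape of one entry in the getScores() return value.
type Score = {
  points: number;
  classificationIdx: 0 | 1 | 2 | 3 | 4;
  classificationName: 'bad' | 'poor' | 'average' | 'good' | 'great';
};

// Index 3 is 'good'; >= 3 therefore means 'good' or 'great'.
const GOOD_IDX = 3;

function meetsQualityBar(score: Score | undefined): boolean {
  return score !== undefined && score.classificationIdx >= GOOD_IDX;
}

// classificationName as the primary UI label, points as detail.
function label(score: Score): string {
  return `${score.classificationName} (${score.points} pts)`;
}
```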
