The getScores() method returns quality scores based on Cloudflare’s Aggregated Internet Measurement (AIM) framework. AIM translates raw network metrics into use-case-specific quality ratings that are directly meaningful to end users.
getScores() requires all measurements to be complete. Call it inside the onFinish callback, or after confirming results.isFinished === true. Calling it mid-test may return an empty object or incomplete scores if the required input metrics are not yet available.

How scores are computed

Each AIM use case (e.g., streaming, gaming) is defined by a set of input metrics and a scoring function. The engine:
  1. Computes a point value for each input metric (download, upload, latency, jitter, packet loss, loaded latency increase).
  2. Sums the points for the use case.
  3. Maps the total to one of five classification bands using configurable thresholds.
The “loaded latency increase” input is derived automatically as the maximum of the download-loaded and upload-loaded latencies minus the unloaded latency. It does not need to be read separately from the Results object.
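The three steps above can be sketched as follows. This is an illustrative model only: the per-metric point functions and classification thresholds below are hypothetical placeholders, not the library's actual values (those are governed by the use-case definitions and configuration).

```javascript
// The five band labels, in index order.
const classificationNames = ['bad', 'poor', 'average', 'good', 'great'];

// Hypothetical per-metric point functions for a "streaming" use case.
// Each maps a raw metric value to a point contribution.
const streamingPoints = {
  download:   mbps => Math.min(mbps / 25, 1) * 50,      // up to 50 pts
  latency:    ms   => Math.max(0, 1 - ms / 100) * 30,   // up to 30 pts
  packetLoss: pct  => Math.max(0, 1 - pct / 5) * 20,    // up to 20 pts
};

// Hypothetical thresholds: total points at or above the nth entry
// lands in band index n + 1.
const thresholds = [20, 40, 60, 80];

function scoreUseCase(metrics, pointFns) {
  // Steps 1 and 2: compute a point value per input metric, then sum.
  const points = Object.entries(pointFns)
    .reduce((sum, [name, fn]) => sum + fn(metrics[name]), 0);
  // Step 3: map the total to one of five classification bands.
  const classificationIdx = thresholds.filter(t => points >= t).length;
  return {
    points,
    classificationIdx,
    classificationName: classificationNames[classificationIdx],
  };
}

console.log(scoreUseCase({ download: 50, latency: 20, packetLoss: 0 }, streamingPoints));
```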

getScores()

Returns an object keyed by use case name. Each value contains a numeric score and a classification.
import SpeedTest from '@cloudflare/speedtest';

const engine = new SpeedTest();

engine.onFinish = results => {
  const scores = results.getScores();
  console.log(scores);
};

Example output

{
  "streaming": {
    "points": 87.4,
    "classificationIdx": 3,
    "classificationName": "good"
  },
  "gaming": {
    "points": 92.1,
    "classificationIdx": 4,
    "classificationName": "great"
  },
  "rtc": {
    "points": 61.0,
    "classificationIdx": 2,
    "classificationName": "average"
  }
}
A use case entry is only included in the returned object when all of its required input metrics are available. If, for example, packet loss was not measured, use cases that depend on it will be absent from the result.
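Because an entry may be absent, check for its presence before reading its fields. A minimal sketch (the `describeScore` helper is hypothetical, not part of the library):

```javascript
// Returns a display string for a use case, or a fallback when the
// use case is absent because its input metrics were not measured.
function describeScore(scores, useCase) {
  const entry = scores[useCase];
  if (!entry) return `${useCase}: not available (missing input metrics)`;
  return `${useCase}: ${entry.classificationName} (${entry.points} pts)`;
}

const scores = {
  streaming: { points: 87.4, classificationIdx: 3, classificationName: 'good' },
};

console.log(describeScore(scores, 'streaming'));
console.log(describeScore(scores, 'gaming'));
```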

Score shape

points
number
Aggregate numeric score for the use case. Higher is better. The scale and maximum value depend on the use case definition and the configured aimMeasurementScoring and aimExperiencesDefs options.
classificationIdx
0 | 1 | 2 | 3 | 4
Numeric index representing the quality band. Maps as follows:
Index  Name
0      bad
1      poor
2      average
3      good
4      great
classificationName
'bad' | 'poor' | 'average' | 'good' | 'great'
Human-readable label for the quality band. Equivalent to classificationNames[classificationIdx].
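The equivalence can be checked directly. Here `classificationNames` is spelled out from the table above; the sample entry is taken from the example output:

```javascript
// Band labels in index order, matching the table above.
const classificationNames = ['bad', 'poor', 'average', 'good', 'great'];

// Sample entry from the example output.
const entry = { points: 87.4, classificationIdx: 3, classificationName: 'good' };

// classificationName is the label at position classificationIdx.
console.log(classificationNames[entry.classificationIdx] === entry.classificationName);
```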

Displaying scores

Use classificationName for UI labels and classificationIdx when mapping to colors or icons programmatically.
engine.onFinish = results => {
  const scores = results.getScores();

  const colorMap = {
    bad:     '#e53e3e',
    poor:    '#ed8936',
    average: '#ecc94b',
    good:    '#48bb78',
    great:   '#38a169',
  };

  Object.entries(scores).forEach(([useCase, { classificationName, points }]) => {
    const color = colorMap[classificationName];
    console.log(`${useCase}: ${classificationName} (${points.toFixed(1)} pts) — ${color}`);
  });
};

Checking score availability

Because complete scores are only available once all measurements have finished, guard against calling getScores() before the test completes:
engine.onResultsChange = () => {
  if (!engine.results.isFinished) return;

  const scores = engine.results.getScores();
  // safe to use scores here
};
Prefer using the onFinish event over polling isFinished inside onResultsChange. onFinish is fired exactly once, immediately after the last measurement completes.
