The Metrics class template computes per-class and aggregate classification metrics from a confusion matrix. It calculates precision, recall, F1 score, and IoU, and provides both macro and micro averaging across classes.

Template parameters

CM (typename): Confusion matrix type (typically ConfusionMatrix<T, Label>). Must provide:
  • value_type type alias
  • num_classes() method
  • operator[](size_t) for row access

Type aliases

using T = typename std::remove_cvref_t<CM>::value_type;
The underlying arithmetic type from the confusion matrix.

Constructor

explicit Metrics(const CM& cm);
cm (const CM&): Reference to a confusion matrix. The matrix must remain valid for the lifetime of the Metrics object.

Basic count methods

tp

[[nodiscard]] T tp(std::size_t k) const noexcept;
Returns the true positive count for class k.
k (std::size_t): Class index (0-based).
Returns T: Number of samples correctly predicted as class k.

fp

[[nodiscard]] T fp(std::size_t k) const noexcept;
Returns the false positive count for class k.
k (std::size_t): Class index (0-based).
Returns T: Number of samples incorrectly predicted as class k.

fn

[[nodiscard]] T fn(std::size_t k) const noexcept;
Returns the false negative count for class k.
k (std::size_t): Class index (0-based).
Returns T: Number of class k samples predicted as other classes.

Per-class metrics

precision

[[nodiscard]] double precision(std::size_t k) const noexcept;
Computes precision for class k: TP / (TP + FP). Returns 0.0 if denominator is zero.
k (std::size_t): Class index (0-based).
Returns double: Precision score in range [0.0, 1.0].

recall

[[nodiscard]] double recall(std::size_t k) const noexcept;
Computes recall for class k: TP / (TP + FN). Returns 0.0 if denominator is zero.
k (std::size_t): Class index (0-based).
Returns double: Recall score in range [0.0, 1.0].

f1

[[nodiscard]] double f1(std::size_t k) const noexcept;
Computes F1 score for class k: 2 * (precision * recall) / (precision + recall). Returns 0.0 if denominator is zero.
k (std::size_t): Class index (0-based).
Returns double: F1 score in range [0.0, 1.0].

iou

[[nodiscard]] double iou(std::size_t k) const noexcept;
Computes Intersection over Union (Jaccard index) for class k: TP / (TP + FP + FN). Returns 0.0 if denominator is zero.
k (std::size_t): Class index (0-based).
Returns double: IoU score in range [0.0, 1.0].

Macro-averaged metrics

Macro averaging computes the metric independently for each class, then takes the unweighted mean. This gives equal weight to all classes regardless of support.

macro_precision

[[nodiscard]] double macro_precision() const noexcept;
Returns double: Mean of per-class precision scores.

macro_recall

[[nodiscard]] double macro_recall() const noexcept;
Returns double: Mean of per-class recall scores.

macro_f1

[[nodiscard]] double macro_f1() const noexcept;
Returns double: Mean of per-class F1 scores.

mean_iou

[[nodiscard]] double mean_iou() const noexcept;
Returns double: Mean of per-class IoU scores.

Micro-averaged metrics

Micro averaging pools the TP, FP, and FN counts of all classes before computing the metric. This gives equal weight to every sample rather than every class.

micro_precision

[[nodiscard]] double micro_precision() const noexcept;
Computes global TP / (global TP + global FP) across all classes.
Returns double: Micro-averaged precision score.

micro_recall

[[nodiscard]] double micro_recall() const noexcept;
Computes global TP / (global TP + global FN) across all classes.
Returns double: Micro-averaged recall score.

micro_f1

[[nodiscard]] double micro_f1() const noexcept;
Computes F1 from micro-averaged precision and recall.
Returns double: Micro-averaged F1 score.

Example usage

#include <cstddef>
#include <iostream>
#include <vector>

#include <mlpp/model_validation/confusion_matrix.hpp>
#include <mlpp/model_validation/metrics.h>

using namespace mlpp::model_validation;

int main() {
    // Create confusion matrix for a 3-class problem
    ConfusionMatrix<std::size_t> cm(3);

    // Populate with predictions
    std::vector<std::size_t> y_true = {0, 0, 1, 1, 2, 2};
    std::vector<std::size_t> y_pred = {0, 1, 1, 1, 2, 0};

    for (std::size_t i = 0; i < y_true.size(); ++i) {
        cm.update(y_true[i], y_pred[i]);
    }

    // Compute metrics
    Metrics metrics(cm);

    // Per-class metrics
    std::cout << "Class 0 precision: " << metrics.precision(0) << std::endl;
    std::cout << "Class 1 recall: " << metrics.recall(1) << std::endl;
    std::cout << "Class 2 F1: " << metrics.f1(2) << std::endl;

    // Macro averages (equal weight per class)
    std::cout << "Macro F1: " << metrics.macro_f1() << std::endl;
    std::cout << "Mean IoU: " << metrics.mean_iou() << std::endl;

    // Micro averages (equal weight per sample)
    std::cout << "Micro precision: " << metrics.micro_precision() << std::endl;
    std::cout << "Micro F1: " << metrics.micro_f1() << std::endl;
}
