The Metrics class template computes per-class and aggregate classification metrics from a confusion matrix. It calculates precision, recall, F1 score, and IoU, and provides both macro and micro averaging across classes.
Template parameters
Confusion matrix type (typically ConfusionMatrix<T, Label>). Must provide:
- a value_type type alias
- a num_classes() method
- operator[](size_t) for row access
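A minimal sketch of a type meeting these requirements (DenseConfusionMatrix is an illustrative name, not part of the library; only the three listed members are mandated by the template parameter):

```cpp
#include <array>
#include <cstddef>

// Hypothetical fixed-size confusion matrix satisfying the documented
// interface: value_type alias, num_classes(), and operator[](size_t).
template <typename T, std::size_t N>
struct DenseConfusionMatrix {
    using value_type = T;                      // required type alias
    std::array<std::array<T, N>, N> counts{};  // counts[actual][predicted], assumed layout

    std::size_t num_classes() const { return N; }  // required method

    // required: operator[](size_t) for row access
    std::array<T, N>& operator[](std::size_t row) { return counts[row]; }
    const std::array<T, N>& operator[](std::size_t row) const { return counts[row]; }
};
```

Any storage scheme works as long as these three members behave as described.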
Type aliases
Constructor
Reference to a confusion matrix. The matrix must remain valid for the lifetime of the Metrics object.
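A sketch of the reference semantics this implies (the member names and TinyMatrix type here are illustrative, not the real implementation):

```cpp
#include <cstddef>
#include <vector>

// Illustrative matrix type with the members the constructor relies on.
struct TinyMatrix {
    std::vector<std::vector<int>> counts;
    std::size_t num_classes() const { return counts.size(); }
    const std::vector<int>& operator[](std::size_t i) const { return counts[i]; }
};

// Hypothetical sketch: the constructor stores a reference, not a copy,
// which is why the matrix must outlive the Metrics object.
template <typename MatrixT>
class Metrics {
public:
    explicit Metrics(const MatrixT& cm) : cm_(cm) {}  // no copy made
    std::size_t num_classes() const { return cm_.num_classes(); }
private:
    const MatrixT& cm_;  // dangles if the matrix is destroyed first
};
```

Because only a reference is held, returning a Metrics object that outlives its matrix is a use-after-free bug.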
Basic count methods

tp
True positives for class k.
k: class index (0-based)
Returns: number of samples correctly predicted as class k

fp
False positives for class k.
k: class index (0-based)
Returns: number of samples incorrectly predicted as class k

fn
False negatives for class k.
k: class index (0-based)
Returns: number of class k samples predicted as other classes

Per-class metrics
precision
Precision for class k: TP / (TP + FP). Returns 0.0 if the denominator is zero.
k: class index (0-based)
Returns: precision score in range [0.0, 1.0]

recall
Recall for class k: TP / (TP + FN). Returns 0.0 if the denominator is zero.
k: class index (0-based)
Returns: recall score in range [0.0, 1.0]

f1
F1 score for class k: 2 * (precision * recall) / (precision + recall). Returns 0.0 if the denominator is zero.
k: class index (0-based)
Returns: F1 score in range [0.0, 1.0]

iou
IoU for class k: TP / (TP + FP + FN). Returns 0.0 if the denominator is zero.
k: class index (0-based)
Returns: IoU score in range [0.0, 1.0]
Macro-averaged metrics
Macro averaging computes the metric independently for each class, then takes the unweighted mean. This gives equal weight to all classes regardless of support.

macro_precision
Mean of per-class precision scores
macro_recall
Mean of per-class recall scores
macro_f1
Mean of per-class F1 scores
mean_iou
Mean of per-class IoU scores
Micro-averaged metrics
Micro averaging aggregates the contributions of all classes to compute the average metric. This gives equal weight to all samples.

micro_precision
Micro-averaged precision score
micro_recall
Micro-averaged recall score
micro_f1
Micro-averaged F1 score