Why validation matters
Validation helps you:
- Measure performance: Quantify how well your model generalizes to unseen data
- Compare models: Make objective comparisons between different algorithms or hyperparameters
- Detect overfitting: Identify when your model memorizes training data instead of learning patterns
- Handle class imbalance: Use stratified splitting to ensure fair evaluation across all classes
Core components
Confusion matrix
The ConfusionMatrix class tracks prediction counts for multi-class classification problems. Rows represent true labels; columns represent predicted labels.
The confusion matrix stores counts internally and provides efficient access via operator[] for computing derived metrics.
Metrics
The Metrics class computes precision, recall, F1 score, IoU, and macro/micro averages from any confusion matrix. See the metrics page for details.
Cross-validation
Stratified k-fold cross-validation and ROC analysis help you estimate model performance on held-out data. See the cross-validation page for implementation details.
Typical workflow
- Build a confusion matrix: Track predictions during evaluation
- Compute metrics: Calculate precision, recall, F1, and other performance measures
- Use cross-validation: Get robust estimates with stratified k-fold splitting
- Analyze ROC curves: Evaluate threshold-agnostic performance for binary tasks
Template parameters
All validation classes use C++ templates for flexibility:
- T: Arithmetic type for counts (std::size_t, int, double)
- Label: Integer-like label type (int, std::size_t, enum)
- Score: Floating-point type for classifier scores (float, double)