Function Signature
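The signature itself is not preserved in this copy. Based on the parameter and return-value descriptions below, it is roughly the following (the parameter name `metrics` is an assumption):

```python
def print_metrics(metrics: dict) -> None:
    """Print a formatted table of model evaluation metrics.

    Expects the metrics dictionary produced by evaluate_model();
    prints to the console and returns nothing.
    """
    ...
```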
Description
The print_metrics() function displays model evaluation metrics in a clean, formatted table. It automatically detects potential overfitting or underfitting issues and provides visual indicators.
Parameters
A dictionary containing model evaluation metrics, typically returned by the evaluate_model() function.
Required keys:
- model_name (str): Name of the model
- train_mse (float): Training Mean Squared Error
- test_mse (float): Test Mean Squared Error
- train_rmse (float): Training Root Mean Squared Error
- test_rmse (float): Test Root Mean Squared Error
- train_mae (float): Training Mean Absolute Error
- test_mae (float): Test Mean Absolute Error
- train_r2 (float): Training R² score
- test_r2 (float): Test R² score
- cv_r2_mean (float): Cross-validation R² mean
- cv_r2_std (float): Cross-validation R² standard deviation
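A dictionary carrying every required key might look like this (the model name and all values are made up for illustration, not real results):

```python
# Illustrative metrics dictionary with every required key.
# All numbers are invented for demonstration purposes only.
metrics = {
    "model_name": "Random Forest",  # hypothetical model name
    "train_mse": 2.31,
    "test_mse": 3.05,
    "train_rmse": 1.52,
    "test_rmse": 1.75,
    "train_mae": 1.10,
    "test_mae": 1.32,
    "train_r2": 0.91,
    "test_r2": 0.86,
    "cv_r2_mean": 0.87,
    "cv_r2_std": 0.03,
}
```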
Return Value
This function does not return a value. It prints formatted output to the console.
Output Format
The function prints a formatted table showing:
- Model Name Header - Clearly identifies the model being evaluated
- Performance Metrics - Side-by-side comparison of training vs test metrics:
- MSE (Mean Squared Error)
- RMSE (Root Mean Squared Error)
- MAE (Mean Absolute Error)
- R² (Coefficient of Determination)
- Cross-Validation Results - Mean ± standard deviation of CV R² scores
- Model Fit Analysis - Automatic detection of overfitting or underfitting
Example Output
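The original example output did not survive in this copy; an illustrative rendering (numbers invented for demonstration) could look like:

```
=== Random Forest ===
Metric         Train        Test
MSE           2.3100      3.0500
RMSE          1.5200      1.7500
MAE           1.1000      1.3200
R2            0.9100      0.8600
CV R²: 0.8700 ± 0.0300
✓ Model fit looks reasonable
```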
Overfitting and Underfitting Detection
The function automatically analyzes the metrics and prints a warning when the training and test scores suggest overfitting or underfitting.
Usage Example
Use Cases
- Quick Model Assessment - Instantly see if your model is performing well
- Debugging - Identify overfitting/underfitting issues during development
- Model Comparison - Print metrics for multiple models to compare visually
- Reporting - Generate formatted output for documentation or reports
Integration with Workflow
Related Functions
- evaluate_model() - Generate the metrics dictionary used by this function
- compare_models() - Compare multiple models side-by-side in a table