Overview

Backtesting evaluates model accuracy by testing predictions against historical data. CryptoView Pro provides a comprehensive backtesting framework with advanced metrics to validate model performance before live deployment.

Backtesting Workflow

1. Data Split

Historical data is split into training and testing sets using temporal ordering:
# 80% training, 20% testing (most recent data)
train_size = 0.8
split_idx = int(len(df) * train_size)

train_data = df.iloc[:split_idx]
test_data = df.iloc[split_idx:]

2. Model Training

Model is trained only on the training set:
from models.xgboost_model import XGBoostCryptoPredictor

predictor = XGBoostCryptoPredictor()
metrics = predictor.train(df, train_size=0.8)

3. Prediction

Generate predictions for the test period:
# Prepare test data
X_train, X_test, y_train, y_test = predictor.prepare_data(df, 0.8)

# Predict on test set
test_predictions = predictor.model.predict(X_test)

4. Evaluation

Compare predictions against actual values:
from utils.backtesting import Backtester

backtester = Backtester()
metrics = backtester.calculate_metrics(y_test, test_predictions)

Performance Metrics

The backtesting system calculates multiple metrics to evaluate model performance:

Error Metrics

MAE (Mean Absolute Error)

Average absolute difference between predictions and actual values.
mae = np.mean(np.abs(actual - predicted))
Interpretation: Lower is better. Shows average prediction error in dollars.
Example: MAE = $250 means predictions are off by $250 on average.
RMSE (Root Mean Squared Error)

Square root of average squared errors. Penalizes large errors more heavily.
rmse = np.sqrt(np.mean((actual - predicted) ** 2))
Interpretation: Lower is better. Always >= MAE. A large gap between RMSE and MAE indicates inconsistent errors.
Example: RMSE = $380 with MAE = $250 suggests some large outlier errors.
MAPE (Mean Absolute Percentage Error)

Average percentage error, a scale-independent metric.
mape = np.mean(np.abs((actual - predicted) / actual)) * 100
Interpretation: Percentage error regardless of price level. <5% is excellent, <10% is good.
Example: MAPE = 2.3% means predictions are typically within 2.3% of the actual price.
Median APE (Median Absolute Percentage Error)

Median absolute percentage error, robust to outliers.
median_ape = np.median(np.abs((actual - predicted) / actual)) * 100
Interpretation: Less sensitive to extreme errors than MAPE.
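
For a concrete feel of how these four error metrics behave, here is a self-contained NumPy sketch on toy arrays (the numbers are illustrative, not real backtest output):
import numpy as np

# Toy data: the last prediction is a large miss (illustrative values only)
actual = np.array([100.0, 102.0, 101.0, 105.0, 110.0])
predicted = np.array([101.0, 101.5, 103.0, 104.0, 118.0])

errors = np.abs(actual - predicted)

mae = np.mean(errors)                                # average dollar error
rmse = np.sqrt(np.mean((actual - predicted) ** 2))   # penalizes the large miss
mape = np.mean(errors / actual) * 100                # scale-independent
median_ape = np.median(errors / actual) * 100        # robust to the outlier

print(f"MAE: {mae:.2f}  RMSE: {rmse:.2f}  MAPE: {mape:.2f}%  Median APE: {median_ape:.2f}%")
# RMSE is noticeably larger than MAE here because of the single big error at the end.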

Accuracy Metrics

R² Score

Proportion of variance explained by the model.
from sklearn.metrics import r2_score

r2 = r2_score(actual, predicted)
Interpretation:
  • 1.0 = Perfect predictions
  • 0.8-0.95 = Excellent
  • 0.6-0.8 = Good
  • <0.6 = Poor
  • <0 = Worse than baseline
Example: R² = 0.92 means the model explains 92% of price variance.
Direction Accuracy

Percentage of correct directional predictions (up/down).
actual_direction = np.sign(np.diff(actual))
pred_direction = np.sign(np.diff(predicted))
direction_accuracy = np.mean(actual_direction == pred_direction) * 100
Interpretation: >55% is profitable for trading, >65% is excellent.
Example: 72% direction accuracy means the trend was predicted correctly in 72% of cases.
Max Error

Largest single prediction error.
max_error = np.max(np.abs(actual - predicted))
Interpretation: Identifies worst-case scenarios and outliers.
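
Similarly, a small sketch computing the accuracy metrics above on illustrative arrays:
import numpy as np
from sklearn.metrics import r2_score

# Illustrative values only
actual = np.array([100.0, 102.0, 101.0, 105.0, 110.0])
predicted = np.array([101.0, 101.5, 102.0, 104.5, 112.0])

# R²: proportion of variance in actual prices explained by the predictions
r2 = r2_score(actual, predicted)

# Direction accuracy: fraction of up/down moves called correctly
actual_direction = np.sign(np.diff(actual))
pred_direction = np.sign(np.diff(predicted))
direction_accuracy = np.mean(actual_direction == pred_direction) * 100

# Worst single miss
max_error = np.max(np.abs(actual - predicted))

print(f"R²: {r2:.2f}  Direction Accuracy: {direction_accuracy:.1f}%  Max Error: {max_error:.2f}")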

Complete Backtesting Example

import pandas as pd
import numpy as np
from models.xgboost_model import XGBoostCryptoPredictor, backtest_model
from utils.backtesting import Backtester
import plotly.graph_objects as go

# Load historical data
from data.collectors import CryptoDataCollector

collector = CryptoDataCollector('kraken')
df = collector.fetch_ohlcv('BTC/USDT', '1h', limit=2000)

# Initialize predictor
predictor = XGBoostCryptoPredictor(
    n_estimators=200,
    learning_rate=0.07,
    max_depth=6
)

# Run comprehensive backtest
backtest_results = backtest_model(df, predictor, train_size=0.8)

# Extract results
metrics = backtest_results['metrics']
test_actual = backtest_results['test_actual']
test_predicted = backtest_results['test_predicted']

# Display metrics
print("=== BACKTEST RESULTS ===")
print(f"\nError Metrics:")
print(f"  MAE:  ${metrics['test_mae']:,.2f}")
print(f"  RMSE: ${metrics['test_rmse']:,.2f}")
print(f"  MAPE: {metrics['test_mape']:.2f}%")

print(f"\nAccuracy Metrics:")
print(f"  Direction Accuracy: {metrics['test_direction_accuracy']:.2f}%")

print(f"\nTraining vs Test:")
print(f"  Train MAPE: {metrics['train_mape']:.2f}%")
print(f"  Test MAPE:  {metrics['test_mape']:.2f}%")
print(f"  Overfitting: {metrics['train_mape'] - metrics['test_mape']:+.2f}%")

# Create visualization
fig = go.Figure()

fig.add_trace(go.Scatter(
    x=list(range(len(test_actual))),
    y=test_actual,
    mode='lines',
    name='Actual',
    line=dict(color='cyan', width=2)
))

fig.add_trace(go.Scatter(
    x=list(range(len(test_predicted))),
    y=test_predicted,
    mode='lines',
    name='Predicted',
    line=dict(color='orange', width=2, dash='dash')
))

# Add error band
error = np.abs(test_actual - test_predicted)
fig.add_trace(go.Scatter(
    x=list(range(len(error))),
    y=error,
    mode='lines',
    name='Absolute Error',
    line=dict(color='red', width=1),
    yaxis='y2'
))

fig.update_layout(
    title='Backtest Results: Actual vs Predicted',
    xaxis_title='Time Period',
    yaxis_title='Price (USDT)',
    yaxis2=dict(title='Error ($)', overlaying='y', side='right'),
    height=600,
    template='plotly_dark'
)

fig.show()

Model-Specific Backtesting

XGBoost Backtest

from models.xgboost_model import XGBoostCryptoPredictor, backtest_model

# Initialize model
predictor = XGBoostCryptoPredictor()

# Run backtest with custom split
results = backtest_model(df, predictor, train_size=0.75)  # 75% train, 25% test

# Analyze feature importance
feature_importance = results['feature_importance']
print("\nTop 10 Most Important Features:")
print(feature_importance.head(10))

# Features are ranked by their contribution to predictions
# High importance = model relies heavily on this feature

Prophet Backtest

from models.prophet_model import ProphetCryptoPredictor, backtest_prophet

# Initialize Prophet
predictor = ProphetCryptoPredictor()

# Prophet-specific backtest (uses cross-validation)
results = backtest_prophet(
    df,
    predictor,
    test_periods=168  # Test on last 168 hours (7 days)
)

print(f"Prophet MAPE: {results['metrics']['MAPE']:.2f}%")
print(f"Prophet RMSE: ${results['metrics']['RMSE']:,.2f}")

# Prophet includes uncertainty intervals
if 'predictions' in results:
    preds = results['predictions']
    print(f"\nAverage Uncertainty Band: {preds['uncertainty_width'].mean():.2f}%")

Hybrid Model Backtest

from models.hybrid_model import HybridCryptoPredictor

# Train hybrid model
predictor = HybridCryptoPredictor()
training_info = predictor.train(df)

# Compare both models
print("XGBoost Performance:")
for metric, value in training_info['xgboost'].items():
    if 'test' in metric:
        print(f"  {metric}: {value:.2f}")

print("\nProphet Performance:")
for metric, value in training_info['prophet'].items():
    print(f"  {metric}: {value:.2f}")

# Test predictions at different horizons
for hours in [24, 72, 168, 720]:
    predictions = predictor.predict_future(df, periods=hours)
    recommended = predictions.get('recommended', 'unknown')
    print(f"\n{hours}h prediction: Using {recommended.upper()} model")

Walk-Forward Analysis

More robust backtesting using rolling windows:
import pandas as pd
import numpy as np
from models.xgboost_model import XGBoostCryptoPredictor

def walk_forward_backtest(df: pd.DataFrame, 
                          window_size: int = 1000,
                          step_size: int = 24,
                          prediction_horizon: int = 24) -> dict:
    """
    Walk-forward backtesting with rolling windows
    
    Args:
        df: Historical data
        window_size: Training window size
        step_size: How many periods to move forward each iteration
        prediction_horizon: How far ahead to predict
    """
    predictions = []
    actuals = []
    
    for i in range(window_size, len(df) - prediction_horizon, step_size):
        # Training window
        train_data = df.iloc[i-window_size:i]
        
        # Train model
        predictor = XGBoostCryptoPredictor()
        predictor.train(train_data, train_size=1.0)  # Use all training data
        
        # Predict
        future = predictor.predict_future(train_data, periods=prediction_horizon)
        
        # Actual values
        actual = df['close'].iloc[i:i+prediction_horizon].values
        pred = future['predicted_price'].values
        
        predictions.extend(pred)
        actuals.extend(actual)
    
    # Calculate metrics
    predictions = np.array(predictions)
    actuals = np.array(actuals)
    
    from utils.backtesting import Backtester
    backtester = Backtester()
    metrics = backtester.calculate_metrics(actuals, predictions)
    
    return {
        'metrics': metrics,
        'predictions': predictions,
        'actuals': actuals,
        'n_windows': len(range(window_size, len(df) - prediction_horizon, step_size))
    }

# Run walk-forward analysis
results = walk_forward_backtest(
    df,
    window_size=1000,  # Train on 1000 hours
    step_size=24,      # Move forward 24 hours each iteration
    prediction_horizon=24  # Predict 24 hours ahead
)

print(f"Walk-Forward Results ({results['n_windows']} windows):")
print(f"  Average MAPE: {results['metrics']['MAPE']:.2f}%")
print(f"  Direction Accuracy: {results['metrics']['Direction_Accuracy']:.2f}%")

Interpreting Results

Good Model Performance

Benchmarks for Crypto Prediction

  • MAPE: <5% excellent, <10% good
  • Direction Accuracy: >55% profitable, >65% excellent
  • R² Score: >0.7 good, >0.85 excellent
  • Train/Test Gap: <5% difference indicates no overfitting

Red Flags

Signs of Overfitting:
  • Train MAPE ≪ Test MAPE (large gap)
  • Train accuracy >95% but test accuracy <60%
  • Model performs well in backtest but fails in production
Solutions (see the sketch after this list):
  • Reduce model complexity (lower max_depth)
  • Increase regularization
  • Use more training data
  • Simplify feature set
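
A minimal sketch of retraining with reduced complexity, assuming XGBoostCryptoPredictor accepts the same constructor arguments shown earlier (n_estimators, learning_rate, max_depth); adding explicit L1/L2 regularization would additionally require the constructor to forward XGBoost's reg_alpha/reg_lambda parameters, which this guide does not confirm:
from models.xgboost_model import XGBoostCryptoPredictor, backtest_model

# Hypothetical "simpler" configuration: fewer, shallower trees and a slower
# learning rate. Parameter names match the constructor call shown earlier.
regularized_predictor = XGBoostCryptoPredictor(
    n_estimators=150,     # fewer trees
    learning_rate=0.05,   # slower learning
    max_depth=4           # shallower trees are less prone to overfitting
)

results = backtest_model(df, regularized_predictor, train_size=0.8)
metrics = results['metrics']

# Re-check the train/test gap after retraining
gap = metrics['test_mape'] - metrics['train_mape']
print(f"Train MAPE: {metrics['train_mape']:.2f}%  "
      f"Test MAPE: {metrics['test_mape']:.2f}%  Gap: {gap:+.2f}%")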

Example Analysis

def analyze_backtest_results(metrics: dict) -> str:
    """
    Provide interpretation of backtest results
    """
    test_mape = metrics['test_mape']
    direction_acc = metrics['test_direction_accuracy']
    train_test_gap = abs(metrics['train_mape'] - metrics['test_mape'])
    
    report = []
    
    # MAPE assessment
    if test_mape < 3:
        report.append("✅ Excellent prediction accuracy")
    elif test_mape < 7:
        report.append("✓ Good prediction accuracy")
    else:
        report.append("⚠️ Moderate accuracy - consider model improvements")
    
    # Direction accuracy
    if direction_acc > 65:
        report.append("✅ Excellent directional predictions")
    elif direction_acc > 55:
        report.append("✓ Profitable directional accuracy")
    else:
        report.append("❌ Insufficient directional accuracy for trading")
    
    # Overfitting check
    if train_test_gap < 3:
        report.append("✅ No overfitting detected")
    elif train_test_gap < 7:
        report.append("⚠️ Slight overfitting")
    else:
        report.append("❌ Significant overfitting - retrain with regularization")
    
    return "\n".join(report)

# Example usage
results = backtest_model(df, predictor, train_size=0.8)
print(analyze_backtest_results(results['metrics']))

Backtester Class Reference

The Backtester class provides comprehensive metrics calculation:
from utils.backtesting import Backtester

backtester = Backtester()

# Calculate all metrics
metrics = backtester.calculate_metrics(
    actual=test_actual,
    predicted=test_predicted
)

# Format for display
formatted = backtester.format_metrics(metrics)
print(formatted)

# Available metrics:
# - MAE, RMSE, MAPE, Median_APE
# - R2_Score, Direction_Accuracy, Max_Error
# - Mean_Actual, Mean_Predicted
# - Std_Actual, Std_Predicted
View implementation at utils/backtesting.py:15

Visualization

Create comprehensive backtest charts:
import plotly.graph_objects as go
from plotly.subplots import make_subplots

def create_comprehensive_backtest_chart(actual, predicted):
    """
    Multi-panel backtest visualization
    """
    fig = make_subplots(
        rows=3, cols=1,
        shared_xaxes=True,
        subplot_titles=(
            'Actual vs Predicted',
            'Absolute Error',
            'Cumulative Error'
        ),
        vertical_spacing=0.1,
        row_heights=[0.5, 0.25, 0.25]
    )
    
    x = list(range(len(actual)))
    
    # Panel 1: Predictions
    fig.add_trace(go.Scatter(
        x=x, y=actual, name='Actual',
        line=dict(color='cyan', width=2)
    ), row=1, col=1)
    fig.add_trace(go.Scatter(
        x=x, y=predicted, name='Predicted',
        line=dict(color='orange', width=2, dash='dash')
    ), row=1, col=1)
    
    # Panel 2: Absolute Error
    error = np.abs(actual - predicted)
    fig.add_trace(go.Scatter(
        x=x, y=error, name='Error',
        line=dict(color='red', width=1),
        fill='tozeroy', fillcolor='rgba(255,0,0,0.2)'
    ), row=2, col=1)
    
    # Panel 3: Cumulative Error
    cumulative_error = np.cumsum(actual - predicted)
    fig.add_trace(go.Scatter(
        x=x, y=cumulative_error, name='Cumulative Error',
        line=dict(color='purple', width=2)
    ), row=3, col=1)
    fig.add_hline(y=0, line_dash="dash", line_color="gray", row=3, col=1)
    
    fig.update_layout(height=800, template='plotly_dark')
    return fig
View implementation at app.py:478

Best Practices

Sufficient Data

Use at least 1000 data points, preferably 2000+

Temporal Split

Never shuffle data - maintain temporal order

Multiple Horizons

Test model at various prediction horizons

Walk-Forward

Use rolling windows for robust validation

Multiple Metrics

Don’t rely on a single metric

Realistic Testing

Include transaction costs in trading simulations (see the sketch below)
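
The sketch below illustrates the transaction-cost point with a toy long/flat strategy driven by directional predictions; the 0.1% fee rate and the signal logic are illustrative assumptions, not part of the CryptoView Pro API:
import numpy as np

def simulate_directional_strategy(actual, predicted, fee_rate: float = 0.001) -> dict:
    """Toy long/flat backtest: hold the asset only when an up move is predicted.

    fee_rate (0.1% here) is an assumed per-side transaction cost, not a real exchange fee.
    """
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)

    actual_returns = np.diff(actual) / actual[:-1]
    positions = (np.diff(predicted) > 0).astype(float)   # 1 = long, 0 = flat

    # Pay a fee whenever the position changes (entering or exiting)
    trades = np.abs(np.diff(np.concatenate([[0.0], positions])))

    gross_returns = positions * actual_returns
    net_returns = gross_returns - trades * fee_rate

    return {
        'gross_return_pct': (np.prod(1 + gross_returns) - 1) * 100,
        'net_return_pct': (np.prod(1 + net_returns) - 1) * 100,
        'n_trades': int(trades.sum()),
    }

# Example usage with the backtest outputs from earlier
stats = simulate_directional_strategy(test_actual, test_predicted)
print(f"Gross: {stats['gross_return_pct']:.2f}%  "
      f"Net after fees: {stats['net_return_pct']:.2f}%  Trades: {stats['n_trades']}")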

Code Reference

Key files for backtesting:
  • utils/backtesting.py:15 - Metrics calculation
  • utils/backtesting.py:60 - Metrics formatting
  • models/xgboost_model.py:323 - XGBoost backtest function
  • models/prophet_model.py - Prophet backtest function
  • app.py:478 - Backtest visualization
Backtesting shows past performance, not future results. Markets change - regularly retrain and re-validate models.
