
Quick Comparison

XGBoost

Best for: 1-72 hours
Strength: High short-term accuracy
Speed: Fast training, slow prediction

Prophet

Best for: 1 week - 1 month
Strength: Trend & seasonality
Speed: Slow training, fast prediction

Hybrid

Best for: All horizons
Strength: Automatic selection
Speed: Slow training, fast prediction

Detailed Comparison Table

Performance Characteristics

| Feature | XGBoost | Prophet | Hybrid |
|---|---|---|---|
| Optimal Horizon | 1-72 hours | 1 week - 1 month | All (1h - 1 month) |
| Short-term Accuracy (1-24h) | ⭐⭐⭐⭐⭐ (Best) | ⭐⭐ (Poor) | ⭐⭐⭐⭐⭐ (Best) |
| Medium-term Accuracy (1-3d) | ⭐⭐⭐⭐ (Good) | ⭐⭐⭐ (Fair) | ⭐⭐⭐⭐⭐ (Best) |
| Long-term Accuracy (1w+) | ⭐⭐ (Poor) | ⭐⭐⭐⭐⭐ (Best) | ⭐⭐⭐⭐⭐ (Best) |
| Typical MAPE (24h) | 2-4% | 6-10% | 2-4% |
| Typical MAPE (7d) | 8-15% | 8-12% | 8-12% |
| Direction Accuracy | 60-70% (short) | 50-58% (long) | Varies by horizon |

Technical Specifications

| Aspect | XGBoost | Prophet | Hybrid |
|---|---|---|---|
| Algorithm | Gradient Boosting | Additive Decomposition | Ensemble |
| Training Time | 5-15 sec | 10-30 sec | 15-45 sec |
| Prediction Time (24h) | 200-500ms | 50-100ms | 50-100ms |
| Prediction Time (168h) | 1-3 sec | 50-100ms | 50-100ms |
| Memory Usage | Medium | Low | High |
| Model Size | ~5-20 MB | ~1-5 MB | ~10-25 MB |
| Min Data Points | 100 | 100 | 100 |
| Recommended Data | 1000+ | 1000+ | 1000+ |

Feature Capabilities

| Feature | XGBoost | Prophet | Hybrid |
|---|---|---|---|
| Feature Engineering | 60+ auto features | None (just ds/y) | 60+ (via XGBoost) |
| Technical Indicators | ✅ RSI, MACD, BB | ❌ Not supported | ✅ Via XGBoost |
| Trend Detection | ❌ Implicit only | ✅ Explicit component | ✅ Via Prophet |
| Seasonality | ❌ Via lag features | ✅ Daily/weekly | ✅ Via Prophet |
| Confidence Intervals | ⚠️ Estimated | ✅ Native | ✅ Both |
| Missing Data | ❌ Must drop | ✅ Handles gaps | ⚠️ Mixed |
| Interpretability | ⭐⭐ Feature importance | ⭐⭐⭐⭐⭐ Clear components | ⭐⭐⭐ Medium |

Model Parameters

XGBoost Configuration

```python
from models.xgboost_model import XGBoostCryptoPredictor

predictor = XGBoostCryptoPredictor(
    n_estimators=200,        # Number of trees
    learning_rate=0.07,      # Gradient descent step size
    max_depth=6,             # Maximum tree depth
    subsample=0.8,           # Row fraction sampled per tree
    colsample_bytree=0.8     # Feature fraction sampled per tree
)
```
Key Parameters:
  • n_estimators: More trees = better fit, slower training (100-500)
  • learning_rate: Lower = more conservative, needs more trees (0.01-0.3)
  • max_depth: Deeper = more complex patterns, risk overfitting (3-10)

Prophet Configuration

```python
from models.prophet_model import ProphetCryptoPredictor

predictor = ProphetCryptoPredictor(
    changepoint_prior_scale=0.5,  # Trend flexibility
    seasonality_prior_scale=10,   # Seasonality strength
    interval_width=0.95           # Confidence interval width
)
```
Key Parameters:
  • changepoint_prior_scale: Higher = more flexible trend (0.001-0.5)
  • seasonality_prior_scale: Higher = stronger seasonality (1-20)
  • interval_width: Confidence level for bounds (0.8-0.99)

Hybrid Configuration

```python
predictor = HybridCryptoPredictor()
# No parameters - uses default XGBoost and Prophet configs
# Automatically handles model selection and weighting
```
Model Selection:
  • ≤24h: XGBoost only (100% weight)
  • 24-72h: weighted ensemble (weight shifts from XGBoost toward Prophet)
  • >72h: Prophet dominant (100% Prophet at 168h)
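The horizon-based weighting above can be sketched as a simple interpolation. The function name and the linear fade between 24h and 168h are illustrative assumptions; the actual weighting scheme inside HybridCryptoPredictor is not shown on this page.

```python
def model_weights(horizon_hours):
    """Return (xgboost_weight, prophet_weight) for a horizon, as a sketch.

    Thresholds (24h, 168h) follow the docs; the linear fade in between
    is an assumption, not the real implementation.
    """
    if horizon_hours <= 24:
        return 1.0, 0.0          # XGBoost only
    if horizon_hours >= 168:
        return 0.0, 1.0          # Prophet only
    # Linear fade from XGBoost toward Prophet between 24h and 168h
    prophet_w = (horizon_hours - 24) / (168 - 24)
    return 1.0 - prophet_w, prophet_w
```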

Prediction Comparison

Typical Accuracy (24-hour horizon)

| Model | MAPE | Direction Acc | Best Use |
|---|---|---|---|
| XGBoost | 2-4% | 60-70% | ✅ Recommended |
| Prophet | 6-10% | 52-58% | ❌ Too inaccurate at short horizons |
| Hybrid | 2-4% | 60-70% | ✅ Recommended |

Winner: XGBoost or Hybrid (the Hybrid uses XGBoost at this horizon)

Training Comparison

Training Process

```python
from models.xgboost_model import XGBoostCryptoPredictor

predictor = XGBoostCryptoPredictor()

# train() internally:
# 1. Creates 60+ features
# 2. Splits train/test (80/20)
# 3. Scales features
# 4. Trains gradient boosting
metrics = predictor.train(df, train_size=0.8)

# Returns: train/test MAE, RMSE, MAPE, direction accuracy
```
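The MAPE and direction-accuracy metrics returned by train() can be defined as follows. This is an illustrative NumPy implementation, not the predictor's internal metric code.

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, in percent (assumes no zero actuals)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100)

def direction_accuracy(actual, predicted):
    """Percent of steps where the predicted move has the same sign as the actual move."""
    a = np.asarray(actual, dtype=float)
    p = np.asarray(predicted, dtype=float)
    return float(np.mean(np.sign(np.diff(a)) == np.sign(np.diff(p))) * 100)
```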

Training Speed Benchmark

Test: 1000 hourly data points, Bitcoin
| Model | Training Time | Prediction Time (24h) | Prediction Time (168h) |
|---|---|---|---|
| XGBoost | ~8 seconds | ~300ms (iterative) | ~2 seconds |
| Prophet | ~18 seconds | ~60ms (direct) | ~80ms |
| Hybrid | ~26 seconds | ~60ms | ~80ms |
Prophet is slower to train but much faster for multi-step predictions because it predicts all periods directly (no iteration).

Prediction Speed Comparison

Why XGBoost is Slower for Multi-step

```python
# XGBoost: ITERATIVE (slow for many periods)
for i in range(24):
    features = create_features(df)        # ~5ms
    prediction = model.predict(features)  # ~3ms
    df = append(df, prediction)           # Feed the prediction back in for the next step
# Total: 24 * 8ms = ~192ms + overhead

# Prophet: DIRECT (fast for many periods)
future = make_future_dataframe(periods=24)
predictions = model.predict(future)  # ~60ms for all 24 periods
# Total: ~60ms regardless of the number of periods
```

Prediction Time Chart

| Periods | XGBoost | Prophet | Hybrid |
|---|---|---|---|
| 1 | ~10ms | ~50ms | ~50ms |
| 12 | ~100ms | ~55ms | ~55ms |
| 24 | ~200ms | ~60ms | ~60ms |
| 72 | ~600ms | ~70ms | ~70ms |
| 168 | ~1.5s | ~80ms | ~80ms |
| 720 | ~6s | ~120ms | ~120ms |
For predictions >72 hours, Prophet is 10-50x faster than XGBoost.
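Timings like the ones above can be reproduced with a small best-of-N wall-clock helper. `time_prediction` is a generic sketch: it assumes you pass a zero-argument callable wrapping a trained predictor's prediction call (e.g. `lambda: predictor.predict_future(df, periods=168)`).

```python
import time

def time_prediction(fn, repeats=5):
    """Return the best wall-clock time (in seconds) of fn over a few runs.

    Taking the minimum of several runs reduces noise from caching and
    background load, which matters for millisecond-scale measurements.
    """
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best
```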

Feature Engineering

XGBoost Features (60+)

  • 1-hour return: df['close'].pct_change(1)
  • 4-hour return: df['close'].pct_change(4)
  • 24-hour return: df['close'].pct_change(24)
  • 7-day return: df['close'].pct_change(168)
  • MA periods: 7, 14, 30, 50
  • EMA periods: 12, 26, 50
  • Price-to-MA ratios for each MA
  • Rolling std: 7, 14, 30 periods
  • Based on 1-hour returns
  • Middle band (20-period MA)
  • Upper band (middle + 2*std)
  • Lower band (middle - 2*std)
  • Band position (normalized)
  • RSI normalized, overbought/oversold
  • MACD difference and signal
  • High/low ratio
  • Close/open ratio
  • Volume MA (7 periods)
  • Volume ratio
  • Volume change
  • Hour of day (0-23)
  • Day of week (0-6)
  • Day of month (1-31)
  • Month (1-12)
  • Close lag: 1, 2, 3, 7, 14 periods
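A few of the features above can be sketched with pandas. The column names (`close`, a DatetimeIndex for time features) and the helper name are assumptions for illustration; the predictor's real feature pipeline may differ.

```python
import pandas as pd

def add_basic_features(df):
    """Add a sample of the listed features; expects a 'close' column
    and a DatetimeIndex (for the hour-of-day feature)."""
    out = df.copy()
    out["return_1h"] = out["close"].pct_change(1)      # 1-hour return
    out["ma_20"] = out["close"].rolling(20).mean()     # Bollinger middle band
    std_20 = out["close"].rolling(20).std()
    out["bb_upper"] = out["ma_20"] + 2 * std_20        # middle + 2*std
    out["bb_lower"] = out["ma_20"] - 2 * std_20        # middle - 2*std
    # Band position, normalized: 0 at the lower band, 1 at the upper band
    out["bb_position"] = (out["close"] - out["bb_lower"]) / (out["bb_upper"] - out["bb_lower"])
    out["hour"] = out.index.hour                       # time feature (0-23)
    out["close_lag_1"] = out["close"].shift(1)         # 1-period lag
    return out
```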

Prophet Features (2)

  • ds: Timestamp (automatically extracts seasonality)
  • y: Close price
Derived internally:
  • Trend component
  • Daily seasonality (24-hour patterns)
  • Weekly seasonality (7-day patterns)
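Reshaping OHLCV data into Prophet's two-column format is straightforward; the source column names here (`timestamp`, `close`) are assumptions about the input frame.

```python
import pandas as pd

def to_prophet_frame(df):
    """Convert an OHLCV frame into Prophet's expected ds/y format."""
    return pd.DataFrame({
        "ds": pd.to_datetime(df["timestamp"]),  # Prophet's required time column
        "y": df["close"].astype(float),         # the value to forecast
    })
```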

Use Case Recommendations

Choose XGBoost When:

  • Prediction horizon is 1-72 hours
  • You have technical indicators (RSI, MACD, etc.)
  • You need the highest short-term accuracy
  • Intraday trading strategies
  • Rich OHLCV historical data is available
  • You care about direction accuracy >60%

Choose Prophet When:

  • Prediction horizon is >1 week
  • You need trend decomposition
  • You want native confidence intervals
  • Weekly/monthly planning
  • Seasonality analysis is important
  • You have missing data gaps
  • You need fast multi-step predictions

Choose Hybrid When:

  • Prediction horizon varies (1h to 1 month)
  • You want automatic model selection
  • You are building a general-purpose forecasting system
  • You need a best-of-both-worlds approach
  • Production deployment
  • You don’t want to choose models manually
  • You value smooth transitions between models

Code Examples

Quick Start Comparison

```python
from models.xgboost_model import XGBoostCryptoPredictor

predictor = XGBoostCryptoPredictor()
predictor.train(df, train_size=0.8)
predictions = predictor.predict_future(df, periods=24)

print(predictions['predicted_price'])
```

Backtesting Comparison

```python
from models.xgboost_model import backtest_model, XGBoostCryptoPredictor
from models.prophet_model import backtest_prophet, ProphetCryptoPredictor

# XGBoost backtest
xgb_predictor = XGBoostCryptoPredictor()
xgb_results = backtest_model(df, xgb_predictor, train_size=0.8)
print(f"XGBoost Test MAPE: {xgb_results['metrics']['test_mape']:.2f}%")

# Prophet backtest
prophet_predictor = ProphetCryptoPredictor()
prophet_results = backtest_prophet(df, prophet_predictor, test_periods=168)
print(f"Prophet Test MAPE: {prophet_results['test_metrics']['mape']:.2f}%")

# Compare direction accuracy
print(f"XGBoost Direction: {xgb_results['metrics']['test_direction_accuracy']:.2f}%")
print(f"Prophet Direction: {prophet_results['test_metrics']['direction_accuracy']:.2f}%")
```

Summary Table

| Criteria | XGBoost | Prophet | Hybrid |
|---|---|---|---|
| Best Horizon | 1-72h | 1w-1m | All |
| Accuracy (Short) | ⭐⭐⭐⭐⭐ | ⭐⭐ | ⭐⭐⭐⭐⭐ |
| Accuracy (Long) | ⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ |
| Training Speed | ⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐ |
| Prediction Speed | ⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ |
| Interpretability | ⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ |
| Ease of Use | ⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ |
| Memory Usage | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐ |
| Confidence Intervals | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ |
| Feature Engineering | Auto (60+) | None needed | Auto (60+) |
| Data Requirements | 1000+ | 1000+ | 1000+ |
| Production Ready | | | |

Final Recommendations

For Most Users

Use the Hybrid model - it automatically selects the best approach for your horizon and provides optimal predictions across all timeframes.

For High-Frequency Traders

Use XGBoost - maximum short-term accuracy for intraday strategies.

For Strategic Planning

Use Prophet - clear trend analysis and seasonality for long-term decisions.

Next Steps

XGBoost Details

Deep dive into XGBoost model

Prophet Details

Deep dive into Prophet model

Hybrid Details

Deep dive into Hybrid model
