Overview

The RidgeRegression class implements ridge regression with L2 regularization. It uses standard C++ vectors for data representation. Template parameter:
  • T - Numeric type (typically float or double). Defaults to double.

Constructor

  • lambda (T, default: 1): L2 regularization parameter λ. Higher values increase regularization strength.

Methods

fit

void fit(const std::vector<std::vector<T>>& X,
         const std::vector<T>& y)
Fit the ridge regression model to training data.
  • X (std::vector<std::vector<T>>): Feature matrix; each inner vector holds one sample’s features.
  • y (std::vector<T>): Target vector with one value per sample.

predict

std::vector<T> predict(const std::vector<std::vector<T>>& X) const
Predict target values for new samples.
  • X (std::vector<std::vector<T>>): Feature matrix for the samples to predict.
  Returns std::vector<T>: predicted values, one per sample.

weights

const std::vector<T>& weights() const
  Returns const std::vector<T>&: a const reference to the learned weight vector.

set_lambda

void set_lambda(T lambda)
Update the regularization parameter.
  • lambda (T): New L2 regularization parameter value.

get_lambda

T get_lambda() const
  Returns T: the current L2 regularization parameter value.

Usage example

#include <ridge_regression.h>
#include <cstddef>
#include <vector>
#include <iostream>

int main() {
    // Create training data
    std::vector<std::vector<double>> X = {
        {1.0, 2.0, 3.0},
        {2.0, 3.0, 4.0},
        {3.0, 4.0, 5.0},
        {4.0, 5.0, 6.0}
    };

    std::vector<double> y = {2.5, 4.5, 6.5, 8.5};

    // Create model with lambda = 0.5
    RidgeRegression<double> model(0.5);

    // Fit the model
    model.fit(X, y);

    // Make predictions
    std::vector<std::vector<double>> X_test = {
        {5.0, 6.0, 7.0},
        {6.0, 7.0, 8.0}
    };

    std::vector<double> predictions = model.predict(X_test);

    for (std::size_t i = 0; i < predictions.size(); ++i) {
        std::cout << "Prediction " << i << ": " << predictions[i] << '\n';
    }

    // Access the learned weights
    const auto& w = model.weights();

    // Update lambda and refit
    model.set_lambda(1.0);
    model.fit(X, y);

    return 0;
}

Mathematical formulation

Ridge regression solves the optimization problem:
min_w  ||Xw - y||² + λ||w||²
where λ is the regularization parameter that controls the trade-off between fitting the training data and keeping the weights small.