
Overview

The LinearRegression class implements ordinary least squares (OLS) and, optionally, L2-regularized (ridge) linear regression. It selects a solving strategy automatically based on the problem's geometry and condition number.

Namespace: mlpp::regression

Template parameter:
  • Scalar - Floating-point type (float, double, or long double). Defaults to double.

Mathematical formulation

Solves the optimization problem:
min_w  (1/2n) ||Xw - y||² + (λ/2) ||w||²
The solver automatically chooses the best strategy:
  • n >> d: Normal equations via Cholesky decomposition (fast, O(nd² + d³))
  • d >> n: Thin SVD with closed-form solution (numerically stable)
  • Ill-conditioned: Falls back to JacobiSVD with full pivoting
Features are automatically standardized internally, and coefficients are returned in the original feature space.

Constructor

  • fit_intercept (bool, default: true) - Whether to fit a bias term
  • regularization (Scalar, default: 0) - L2 penalty λ ≥ 0. The default, 0, gives pure OLS
  • method (SolveMethod, default: SolveMethod::Auto) - Linear solver strategy:
      • SolveMethod::Auto - Chosen automatically (recommended)
      • SolveMethod::Cholesky - Normal equations via LDLT
      • SolveMethod::SVD - Thin BDCSVD
      • SolveMethod::JacobiSVD - Full-pivoting JacobiSVD (slowest, most stable)

Methods

fit

void fit(const Matrix& X, const Vector& y)
Fit the linear regression model to training data.
  • X (Matrix) - Feature matrix with shape (n_samples, n_features), row-major order
  • y (Vector) - Target vector with length n_samples

predict

Vector predict(const Matrix& X) const
Predict target values for new samples.
  • X (Matrix) - Feature matrix with shape (n_samples, n_features)
  • Returns (Vector) - Predicted values with length n_samples

score

Scalar score(const Matrix& X, const Vector& y) const
Compute the coefficient of determination R².
R² = 1 - SS_res / SS_tot
  • X (Matrix) - Feature matrix
  • y (Vector) - True target values
  • Returns (Scalar) - R² score in the range (-∞, 1]. Higher values indicate a better fit

residuals

Vector residuals(const Matrix& X, const Vector& y) const
Compute the residual vector e = y - Xw - b on the given data.
  • X (Matrix) - Feature matrix
  • y (Vector) - True target values
  • Returns (Vector) - Residual vector with length n_samples. Throws if the model is not fitted

gradient

Vector gradient(const Matrix& X, const Vector& y) const
Compute gradient of the regularized MSE loss with respect to weights:
∇L = (1/n) Xᵀ(Xw - y) + λw
  • X (Matrix) - Feature matrix
  • y (Vector) - Target values
  • Returns (Vector) - Gradient vector evaluated at the current coefficients

coefficients

const Vector& coefficients() const
  • Returns (Vector) - Coefficient vector in the original (unscaled) feature space, with length n_features

intercept

Scalar intercept() const
  • Returns (Scalar) - Intercept (bias) term. Returns 0 if fit_intercept is false

is_fitted

bool is_fitted() const noexcept
  • Returns (bool) - True after a successful call to fit()

condition_number

Scalar condition_number() const
  • Returns (Scalar) - Effective condition number of the design matrix. Only available after an SVD solve

Usage example

#include <mlpp/regression/linear_regression.hpp>
#include <Eigen/Dense>
#include <iostream>

using namespace mlpp::regression;
using Matrix = Eigen::MatrixXd;
using Vector = Eigen::VectorXd;

int main() {
    // Create training data
    Matrix X(100, 5);  // 100 samples, 5 features
    Vector y(100);     // 100 target values
    // ... fill X and y with data ...

    // Create and fit the model with L2 regularization
    LinearRegression<double> model(
        true,    // fit_intercept
        0.1,     // regularization lambda
        LinearRegression<double>::SolveMethod::Auto
    );

    model.fit(X, y);

    // Make predictions
    Matrix X_test(20, 5);
    Vector predictions = model.predict(X_test);

    // Evaluate the model
    double r2 = model.score(X, y);
    std::cout << "R² score: " << r2 << std::endl;

    // Access learned parameters
    const Vector& coef = model.coefficients();
    double bias = model.intercept();
    double cond = model.condition_number();
    return 0;
}

Type aliases

using Matrix = Eigen::Matrix<Scalar, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>;
using Vector = Eigen::Matrix<Scalar, Eigen::Dynamic, 1>;
using Index = Eigen::Index;
