
Overview

The PolynomialRegression class composes polynomial feature expansion with linear regression to fit polynomial models. It automatically expands input features into polynomial terms and fits a linear model in the expanded space.

Namespace: mlpp::regression

Template parameter:
  • Scalar - Floating-point type (float, double, long double). Defaults to double.

Mathematical formulation

The model fits:
ŷ = Σ_{|α|≤D} w_α ∏ xⱼ^αⱼ  +  b
Two expansion modes are available:
  • Pure powers (include_interactions=false): Appends x₁, x₁², …, x₁ᴰ, x₂, x₂², …, x₂ᴰ, and so on through the last feature
  • Full interaction terms (include_interactions=true): All monomials with degree sum ≤ D

Constructor

  • degree (unsigned, default: 2) - Maximum polynomial degree D ≥ 1
  • include_interactions (bool, default: false) - Whether to include cross-feature monomials (e.g., x₁·x₂). When false, only pure powers are used
  • fit_intercept (bool, default: true) - Whether to fit a bias term. Forwarded to the underlying LinearRegression
  • regularization (Scalar, default: 0) - L2 penalty λ ≥ 0. Forwarded to the underlying LinearRegression
  • method (LinearRegression<Scalar>::SolveMethod, default: SolveMethod::Auto) - Solver strategy forwarded to LinearRegression:
      • SolveMethod::Auto - Chosen automatically (recommended)
      • SolveMethod::Cholesky - Normal equations via LDLT
      • SolveMethod::SVD - Thin BDCSVD
      • SolveMethod::JacobiSVD - Full-pivoting JacobiSVD

Methods

fit

void fit(const Matrix& X, const Vector& y)
Expand features to polynomial degree D and fit the underlying linear model.
  • X (Matrix) - Feature matrix with shape (n_samples, n_features)
  • y (Vector) - Target vector with length n_samples

predict

Vector predict(const Matrix& X) const
Expand features and predict target values.
  • X (Matrix) - Feature matrix with shape (n_samples, n_features)
  • Returns (Vector) - Predicted values with length n_samples

score

Scalar score(const Matrix& X, const Vector& y) const
Compute R² on the provided data.
  • X (Matrix) - Feature matrix
  • y (Vector) - True target values
  • Returns (Scalar) - R² score in the range (-∞, 1]

residuals

Vector residuals(const Matrix& X, const Vector& y) const
Compute the residual vector e = y - ŷ.
  • X (Matrix) - Feature matrix
  • y (Vector) - True target values
  • Returns (Vector) - Residual vector with length n_samples

coefficients

const Vector& coefficients() const
  • Returns (const Vector&) - Coefficients of the underlying linear model in the expanded feature space

intercept

Scalar intercept() const
  • Returns (Scalar) - Intercept of the underlying linear model

is_fitted

bool is_fitted() const noexcept
  • Returns (bool) - True after a successful call to fit()

features

const PolynomialFeatures<Scalar>& features() const noexcept
  • Returns (const PolynomialFeatures<Scalar>&) - Reference to the underlying polynomial feature transformer

regressor

const LinearRegression<Scalar>& regressor() const noexcept
  • Returns (const LinearRegression<Scalar>&) - Reference to the underlying linear regression model

PolynomialFeatures

The underlying feature transformer can also be used independently:

Constructor

  • degree (unsigned, default: 2) - Maximum monomial degree D ≥ 1
  • include_bias (bool, default: true) - Whether to prepend a constant-1 column
  • include_interactions (bool, default: false) - Whether to include cross-feature monomials

transform

Matrix transform(const Matrix& X) const
Expand the input matrix into polynomial feature space.
  • X (Matrix) - Input matrix with shape (n_samples, n_features)
  • Returns (Matrix) - Expanded matrix with shape (n_samples, output_dim(n_features))

output_dim

std::size_t output_dim(std::size_t n_features) const
  • n_features (std::size_t) - Number of input features
  • Returns (std::size_t) - Number of output columns after polynomial expansion

Usage example

#include <mlpp/regression/polynomial_regression.hpp>
#include <Eigen/Dense>
#include <iostream>

using namespace mlpp::regression;
using Matrix = Eigen::MatrixXd;
using Vector = Eigen::VectorXd;

// Create training data
Matrix X(100, 2);  // 100 samples, 2 features
Vector y(100);     // 100 target values
// ... fill X and y with data ...

// Create polynomial regression model with degree 3
// Include interaction terms (e.g., x₁·x₂, x₁²·x₂)
PolynomialRegression<double> model(
    3,      // degree
    true,   // include_interactions
    true,   // fit_intercept
    0.01    // regularization
);

model.fit(X, y);

// Make predictions
Matrix X_test(20, 2);
// ... fill X_test with data ...
Vector predictions = model.predict(X_test);

// Evaluate model
double r2 = model.score(X, y);
std::cout << "R² score: " << r2 << std::endl;

// Access expanded feature space dimensions
const auto& feat_transform = model.features();
std::cout << "Expanded features: " 
          << feat_transform.output_dim(2) << std::endl;

// Use PolynomialFeatures independently
PolynomialFeatures<double> poly(2, true, false);
Matrix X_poly = poly.transform(X);

Type aliases

using Matrix = Eigen::Matrix<Scalar, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>;
using Vector = Eigen::Matrix<Scalar, Eigen::Dynamic, 1>;
