LinearRegression

Ordinary Least Squares Linear Regression. Fits a linear model with coefficients w = (w1, …, wp) to minimize the residual sum of squares between the observed targets and the targets predicted by the linear approximation.
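For intuition about what fit computes, the single-feature case has a well-known closed form: the slope is cov(x, y) / var(x) and the intercept is ȳ − w·x̄. A minimal sketch in plain TypeScript (independent of the deepbox API; the helper name olsFit is illustrative only):

```typescript
// Closed-form OLS for one feature: minimizes Σ (y_i - (w·x_i + b))².
function olsFit(x: number[], y: number[]): { w: number; b: number } {
  const n = x.length;
  const xMean = x.reduce((s, v) => s + v, 0) / n;
  const yMean = y.reduce((s, v) => s + v, 0) / n;
  let cov = 0;
  let varX = 0;
  for (let i = 0; i < n; i++) {
    cov += (x[i] - xMean) * (y[i] - yMean);
    varX += (x[i] - xMean) ** 2;
  }
  const w = cov / varX;        // slope = cov(x, y) / var(x)
  const b = yMean - w * xMean; // intercept = ȳ - w·x̄
  return { w, b };
}
```

For multiple features, LinearRegression solves the same minimization over all coefficients jointly.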

Constructor

new LinearRegression(options?: {
  fitIntercept?: boolean;
  normalize?: boolean;
  copyX?: boolean;
})
options.fitIntercept
boolean
default:"true"
Whether to calculate the intercept for this model.
options.normalize
boolean
default:"false"
Whether to normalize features before regression. If true, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm.
options.copyX
boolean
default:"true"
Whether to copy X or overwrite it (if false, X may be overwritten).

Methods

fit

fit(X: Tensor, y: Tensor): this
Fit linear model using Ordinary Least Squares.
X
Tensor
required
Training data of shape (n_samples, n_features)
y
Tensor
required
Target values of shape (n_samples,)
Returns: The fitted estimator
Throws: ShapeError, DataValidationError

predict

predict(X: Tensor): Tensor
Predict using the linear model.
X
Tensor
required
Samples of shape (n_samples, n_features)
Returns: Predicted values of shape (n_samples,)

score

score(X: Tensor, y: Tensor): number
Return the coefficient of determination R² of the prediction. R² = 1 - (SS_res / SS_tot), where SS_res is the residual sum of squares and SS_tot is the total sum of squares. Best possible score is 1.0. Returns: R² score

Properties

coef
Tensor
Model coefficients (weights) of shape (n_features,)
intercept
Tensor | undefined
Independent term (bias/intercept) in the linear model

Example

import { LinearRegression } from 'deepbox/ml';
import { tensor } from 'deepbox/ndarray';

const X = tensor([[1, 1], [1, 2], [2, 2], [2, 3]]);
const y = tensor([1, 2, 2, 3]);

const model = new LinearRegression({ fitIntercept: true });
model.fit(X, y);

const X_test = tensor([[3, 5]]);
const predictions = model.predict(X_test);
const score = model.score(X, y);

Ridge

Ridge Regression (L2 Regularized Linear Regression). Ridge regression addresses multicollinearity by adding a penalty term (L2 regularization) to the loss function.

Constructor

new Ridge(options?: {
  alpha?: number;
  fitIntercept?: boolean;
  normalize?: boolean;
  solver?: "auto" | "svd" | "cholesky" | "lsqr" | "sag";
  maxIter?: number;
  tol?: number;
})
options.alpha
number
default:"1.0"
Regularization strength. Must be >= 0. Larger values specify stronger regularization.
options.fitIntercept
boolean
default:"true"
Whether to calculate the intercept.
options.normalize
boolean
default:"false"
Whether to normalize features before regression.
options.solver
string
default:"auto"
Solver to use: ‘auto’, ‘svd’, ‘cholesky’, ‘lsqr’, or ‘sag’.
options.maxIter
number
default:"1000"
Maximum number of iterations for iterative solvers.
options.tol
number
default:"1e-4"
Tolerance for stopping criterion.

Methods

fit

fit(X: Tensor, y: Tensor): this
Fit Ridge regression model.
Time Complexity: O(n²p + p³), where n = samples and p = features.
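The stated cost is consistent with solving the regularized normal equations (XᵀX + αI)w = Xᵀy. In the single-feature case without an intercept this collapses to a one-line formula, sketched here in plain TypeScript (the helper name ridgeFit1d is illustrative, not part of the deepbox API):

```typescript
// Single-feature ridge without intercept: w = Σ x_i·y_i / (Σ x_i² + alpha).
// With alpha = 0 this reduces to ordinary least squares; larger alpha
// shrinks w toward zero, which is the effect of the L2 penalty.
function ridgeFit1d(x: number[], y: number[], alpha: number): number {
  let xy = 0;
  let xx = 0;
  for (let i = 0; i < x.length; i++) {
    xy += x[i] * y[i];
    xx += x[i] * x[i];
  }
  return xy / (xx + alpha);
}
```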

predict

predict(X: Tensor): Tensor
Predict using the Ridge regression model.

score

score(X: Tensor, y: Tensor): number
Return the R² score.

Properties

coef
Tensor
Model coefficients of shape (n_features,)
intercept
number
Intercept value
nIter
number | undefined
Number of iterations run by the solver (for iterative solvers)

Example

import { Ridge } from 'deepbox/ml';
import { tensor } from 'deepbox/ndarray';

const X_train = tensor([[1, 1], [1, 2], [2, 2], [2, 3]]);
const y_train = tensor([1, 2, 2, 3]);
const X_test = tensor([[3, 5]]);

const model = new Ridge({ alpha: 0.5 });
model.fit(X_train, y_train);
const predictions = model.predict(X_test);

Lasso

Lasso Regression (L1 Regularized Linear Regression). Lasso performs both regularization and feature selection by adding an L1 penalty that can drive coefficients exactly to zero.

Constructor

new Lasso(options?: {
  alpha?: number;
  fitIntercept?: boolean;
  normalize?: boolean;
  maxIter?: number;
  tol?: number;
  warmStart?: boolean;
  positive?: boolean;
  selection?: "cyclic" | "random";
  randomState?: number;
})
options.alpha
number
default:"1.0"
Regularization strength. Must be >= 0. Controls sparsity of solution.
options.fitIntercept
boolean
default:"true"
Whether to calculate the intercept.
options.normalize
boolean
default:"false"
Whether to normalize features before regression.
options.maxIter
number
default:"1000"
Maximum iterations for coordinate descent.
options.tol
number
default:"1e-4"
Tolerance for convergence. Smaller = more precise but slower.
options.warmStart
boolean
default:"false"
Whether to reuse previous solution as initialization.
options.positive
boolean
default:"false"
Whether to force coefficients to be positive.
options.selection
string
default:"cyclic"
Coordinate selection: ‘cyclic’ or ‘random’.
options.randomState
number
Random seed for reproducibility.

Methods

fit

fit(X: Tensor, y: Tensor): this
Fit Lasso regression model using Coordinate Descent.
Time Complexity: O(k · n · p), where k = iterations, n = samples, p = features.
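The core of each coordinate-descent update is the soft-thresholding operator, which is what drives coefficients exactly to zero. A minimal sketch (the helper name softThreshold is illustrative, not part of the deepbox API):

```typescript
// Soft-thresholding operator: S(z, t) = sign(z) · max(|z| - t, 0).
// Any value with |z| <= t is mapped to exactly 0, which is why the
// L1 penalty produces sparse coefficient vectors.
function softThreshold(z: number, threshold: number): number {
  if (z > threshold) return z - threshold;
  if (z < -threshold) return z + threshold;
  return 0;
}
```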

predict

predict(X: Tensor): Tensor
Predict using the Lasso regression model.

score

score(X: Tensor, y: Tensor): number
Return the R² score.

Properties

coef
Tensor
Model coefficients. Many will be exactly zero due to L1 regularization (sparsity).
intercept
number
Intercept value
nIter
number | undefined
Number of iterations until convergence

Example

import { Lasso } from 'deepbox/ml';
import { tensor } from 'deepbox/ndarray';

const X_train = tensor([[1, 1], [1, 2], [2, 2], [2, 3]]);
const y_train = tensor([1, 2, 2, 3]);
const X_test = tensor([[3, 5]]);

const model = new Lasso({ alpha: 0.1, maxIter: 1000 });
model.fit(X_train, y_train);

// Many coefficients will be exactly 0
console.log(model.coef);

const predictions = model.predict(X_test);

LogisticRegression

Logistic Regression (Binary and Multiclass Classification). Logistic regression uses the logistic (sigmoid) function to model the probability of class membership.
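The sigmoid mapping from a linear score to a probability can be sketched in a few lines of plain TypeScript (the helper names sigmoid and positiveClassProba are illustrative, not part of the deepbox API):

```typescript
// Logistic (sigmoid) function: σ(z) = 1 / (1 + e^(-z)).
// Maps any real-valued score to a probability in (0, 1).
function sigmoid(z: number): number {
  return 1 / (1 + Math.exp(-z));
}

// Probability of the positive class for one sample x,
// given weights w and intercept b: σ(w·x + b).
function positiveClassProba(x: number[], w: number[], b: number): number {
  const z = x.reduce((s, xi, i) => s + xi * w[i], b);
  return sigmoid(z);
}
```

A score of z = 0 gives a probability of exactly 0.5, which is the usual decision boundary between the two classes.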

Constructor

new LogisticRegression(options?: {
  penalty?: "l2" | "none";
  C?: number;
  tol?: number;
  maxIter?: number;
  fitIntercept?: boolean;
  learningRate?: number;
  multiClass?: "ovr" | "auto";
})
options.penalty
string
default:"l2"
Regularization type: ‘l2’ or ‘none’.
options.C
number
default:"1.0"
Inverse regularization strength. Must be > 0. Smaller values = stronger regularization.
options.tol
number
default:"1e-4"
Tolerance for stopping criterion.
options.maxIter
number
default:"100"
Maximum number of iterations.
options.fitIntercept
boolean
default:"true"
Whether to fit intercept.
options.learningRate
number
default:"0.1"
Learning rate for gradient descent.
options.multiClass
string
default:"auto"
Multiclass strategy: ‘ovr’ (One-vs-Rest) or ‘auto’.

Methods

fit

fit(X: Tensor, y: Tensor): this
Fit logistic regression model. Supports binary and multiclass (One-vs-Rest) classification.

predict

predict(X: Tensor): Tensor
Predict class labels for samples.

predictProba

predictProba(X: Tensor): Tensor
Predict class probabilities for samples.
Returns: Probabilities of shape (n_samples, n_classes)

score

score(X: Tensor, y: Tensor): number
Return the mean accuracy on the given test data.

Properties

coef
Tensor
Model coefficients (weights). Shape (n_features,) for binary, (n_classes, n_features) for multiclass.
intercept
number | number[]
Intercept term. Scalar for binary, array for multiclass.
classes
Tensor | undefined
Unique class labels discovered during fitting.

Example

import { LogisticRegression } from 'deepbox/ml';
import { tensor } from 'deepbox/ndarray';

// Binary classification
const X_train = tensor([[0, 0], [1, 1], [2, 2], [3, 3]]);
const y_train = tensor([0, 0, 1, 1]);
const X_test = tensor([[1.5, 1.5]]);

const model = new LogisticRegression({ C: 1.0, maxIter: 100 });
model.fit(X_train, y_train);

const predictions = model.predict(X_test);
const probabilities = model.predictProba(X_test);

// Multiclass classification (One-vs-Rest)
const X_train_multi = tensor([[0, 0], [1, 1], [2, 2], [3, 3]]);
const y_train_multi = tensor([0, 1, 2, 2]);

const multiModel = new LogisticRegression({
  multiClass: 'ovr'
});
multiModel.fit(X_train_multi, y_train_multi);