GradientBoostingClassifier
Gradient Boosting Classifier using shallow regression trees. Supports both binary and multiclass classification.
- Binary: optimizes log loss, mapping raw scores to probabilities with the sigmoid function
- Multiclass: uses a One-vs-Rest (OvR) strategy, training one binary model per class
Constructor
```ts
new GradientBoostingClassifier(options?: {
  nEstimators?: number;
  learningRate?: number;
  maxDepth?: number;
  minSamplesSplit?: number;
})
```
- `nEstimators`: Number of boosting stages (trees) to train.
- `learningRate`: Learning rate; shrinks the contribution of each tree. Smaller values require more trees.
- `maxDepth`: Maximum depth of the individual regression trees.
- `minSamplesSplit`: Minimum number of samples required to split an internal node.
Methods
fit
fit(X: Tensor, y: Tensor): this
Fit the gradient boosting classifier on training data.
Builds an additive model by sequentially fitting regression trees to the pseudo-residuals (gradient of log loss).
- `X`: Training data of shape `(n_samples, n_features)`
- `y`: Target class labels of shape `(n_samples,)`; must contain at least 2 classes.
Returns: The fitted estimator
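For binary log loss the pseudo-residuals have a simple closed form: the negative gradient with respect to the raw score is `y - sigmoid(F(x))`. A minimal sketch of that target computation on plain arrays (an illustration of the math, not the library's internals):

```ts
// Sketch of binary log-loss pseudo-residuals (hypothetical helper, not deepbox API).
// For L = -[y*log(p) + (1-y)*log(1-p)] with p = sigmoid(F(x)),
// the negative gradient w.r.t. F(x) is y - p.
function sigmoid(z: number): number {
  return 1 / (1 + Math.exp(-z));
}

function pseudoResiduals(y: number[], rawScores: number[]): number[] {
  // Each new tree in the ensemble is fit to these residuals.
  return y.map((yi, i) => yi - sigmoid(rawScores[i]));
}
```

At raw score 0 the predicted probability is 0.5, so a positive sample yields residual +0.5 and a negative sample -0.5, pulling the scores in opposite directions.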
predict
predict(X: Tensor): Tensor
Predict class labels for samples in X.
Returns: Predicted class labels of shape (n_samples,)
predictProba
predictProba(X: Tensor): Tensor
Predict class probabilities for samples in X.
Returns: Class probability matrix of shape (n_samples, n_classes)
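Under the OvR strategy each class's binary model produces its own score. One plausible way to turn those into a probability matrix, assumed here for illustration (the docs do not specify the exact normalization), is to pass each score through the sigmoid and normalize every row to sum to 1:

```ts
// Assumed OvR normalization scheme, not confirmed deepbox behavior.
function sigmoid(z: number): number {
  return 1 / (1 + Math.exp(-z));
}

// scores[i][k] is the raw score of sample i under class k's binary model.
function ovrProbabilities(scores: number[][]): number[][] {
  return scores.map((row) => {
    const p = row.map(sigmoid);
    const total = p.reduce((a, b) => a + b, 0);
    return p.map((pk) => pk / total); // each row sums to 1
  });
}
```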
score
score(X: Tensor, y: Tensor): number
Return the mean accuracy on the given test data and labels.
Returns: Accuracy score in range [0, 1]
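Mean accuracy is simply the fraction of samples whose predicted label matches the true label; on plain arrays it is equivalent to:

```ts
// Accuracy = (# correct predictions) / (# samples), the quantity score() reports.
function accuracy(yTrue: number[], yPred: number[]): number {
  const correct = yTrue.filter((yi, i) => yi === yPred[i]).length;
  return correct / yTrue.length;
}
```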
Example
```ts
import { GradientBoostingClassifier } from 'deepbox/ml';
import { tensor } from 'deepbox/ndarray';

const X = tensor([[1, 2], [2, 3], [3, 1], [4, 2]]);
const y = tensor([0, 0, 1, 1]);

const gbc = new GradientBoostingClassifier({ nEstimators: 100 });
gbc.fit(X, y);

const predictions = gbc.predict(X);        // class labels, shape (4,)
const probabilities = gbc.predictProba(X); // probabilities, shape (4, 2)
```
GradientBoostingRegressor
Gradient Boosting Regressor that builds an additive model in a forward stage-wise fashion using regression trees as weak learners. Optimizes squared error loss.
Constructor
```ts
new GradientBoostingRegressor(options?: {
  nEstimators?: number;
  learningRate?: number;
  maxDepth?: number;
  minSamplesSplit?: number;
})
```
- `nEstimators`: Number of boosting stages (trees) to train.
- `learningRate`: Learning rate; shrinks the contribution of each tree.
- `maxDepth`: Maximum depth of the individual regression trees.
- `minSamplesSplit`: Minimum number of samples required to split an internal node.
Methods
fit
fit(X: Tensor, y: Tensor): this
Fit the gradient boosting regressor on training data.
Builds an additive model by sequentially fitting regression trees to the negative gradient (residuals) of the loss function.
- `X`: Training data of shape `(n_samples, n_features)`
- `y`: Target values of shape `(n_samples,)`
Returns: The fitted estimator
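For squared error the negative gradient is just the ordinary residual `y - F(x)`, so each stage reduces to fitting a tree to the current residuals and shrinking its contribution by the learning rate. A schematic of that loop (hypothetical names; a mean-predicting stub stands in for the shallow regression trees the real implementation uses):

```ts
type WeakLearner = (x: number[]) => number;

// Stand-in weak learner: always predicts the mean of its training targets.
// The real implementation fits a shallow regression tree here instead.
function fitStub(targets: number[]): WeakLearner {
  const mean = targets.reduce((a, b) => a + b, 0) / targets.length;
  return () => mean;
}

function fitBoosting(
  X: number[][],
  y: number[],
  nEstimators: number,
  learningRate: number
): { init: number; stages: WeakLearner[] } {
  const init = y.reduce((a, b) => a + b, 0) / y.length; // F0: mean of targets
  const F = y.map(() => init);                          // current predictions
  const stages: WeakLearner[] = [];
  for (let m = 0; m < nEstimators; m++) {
    // Negative gradient of squared error = plain residuals.
    const residuals = y.map((yi, i) => yi - F[i]);
    const tree = fitStub(residuals);
    X.forEach((xi, i) => { F[i] += learningRate * tree(xi); });
    stages.push(tree);
  }
  return { init, stages };
}
```

With the mean stub the residuals average to zero after the initial fit, so the stages contribute nothing; the point of the sketch is the stage-wise structure, which real trees exploit by splitting on features.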
predict
predict(X: Tensor): Tensor
Predict target values for samples in X.
Aggregates the initial prediction and the scaled contributions of all trees.
Returns: Predicted values of shape (n_samples,)
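The aggregation described above is the initial constant plus the learning-rate-scaled sum of tree outputs, i.e. F(x) = F0 + lr * Σ tree_m(x); schematically (hypothetical shapes, mirroring the fit sketch):

```ts
type WeakLearner = (x: number[]) => number;

// F(x) = F0 + learningRate * sum over all trained trees of tree_m(x)
function predictBoosting(
  x: number[],
  init: number,
  stages: WeakLearner[],
  learningRate: number
): number {
  return stages.reduce((acc, tree) => acc + learningRate * tree(x), init);
}
```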
score
score(X: Tensor, y: Tensor): number
Return the R² score on the given test data and target values.
Returns: R² score (best possible is 1.0, can be negative)
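R² is defined as 1 - SS_res / SS_tot: 1.0 for a perfect fit, 0 for a model no better than predicting the mean of the targets, and negative for anything worse. On plain arrays:

```ts
// R² = 1 - SS_res / SS_tot, the quantity score() reports.
function r2Score(yTrue: number[], yPred: number[]): number {
  const mean = yTrue.reduce((a, b) => a + b, 0) / yTrue.length;
  const ssRes = yTrue.reduce((s, yi, i) => s + (yi - yPred[i]) ** 2, 0);
  const ssTot = yTrue.reduce((s, yi) => s + (yi - mean) ** 2, 0);
  return 1 - ssRes / ssTot;
}
```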
Example
```ts
import { GradientBoostingRegressor } from 'deepbox/ml';
import { tensor } from 'deepbox/ndarray';

const X = tensor([[1], [2], [3], [4], [5]]);
const y = tensor([1.2, 2.1, 2.9, 4.0, 5.1]);

const gbr = new GradientBoostingRegressor({ nEstimators: 100 });
gbr.fit(X, y);

const predictions = gbr.predict(X); // predicted values, shape (5,)
```