DecisionTreeClassifier
A non-parametric supervised learning method that predicts class labels by learning simple decision rules inferred from the data features.
Uses the CART (Classification and Regression Trees) algorithm with Gini impurity as the split criterion for classification.
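Gini impurity measures how mixed the class labels at a node are: 1 minus the sum of squared class proportions, so a pure node scores 0 and a perfectly mixed binary node scores 0.5. A plain-array sketch (illustrative only, not the library's internals):

```typescript
// Gini impurity of a set of class labels: 1 - sum(p_k^2).
function gini(labels: number[]): number {
  const counts = new Map<number, number>();
  for (const l of labels) counts.set(l, (counts.get(l) ?? 0) + 1);
  let sumSq = 0;
  for (const c of counts.values()) {
    const p = c / labels.length;
    sumSq += p * p;
  }
  return 1 - sumSq;
}

gini([0, 0, 1, 1]); // 0.5 — perfectly mixed binary node
gini([1, 1, 1, 1]); // 0 — pure node
```

The tree greedily picks the split that most reduces the weighted Gini impurity of the resulting child nodes.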
Constructor
new DecisionTreeClassifier(options?: {
maxDepth?: number;
minSamplesSplit?: number;
minSamplesLeaf?: number;
maxFeatures?: number;
randomState?: number;
})
maxDepth: Maximum depth of the tree. Controls model complexity.
minSamplesSplit: Minimum number of samples required to split an internal node.
minSamplesLeaf: Minimum number of samples required to be at a leaf node.
maxFeatures: Number of features to consider when looking for the best split. If undefined, all features are used.
randomState: Random seed for reproducibility.
Methods
fit
fit(X: Tensor, y: Tensor): this
Build a decision tree classifier from the training set.
predict
predict(X: Tensor): Tensor
Predict class labels for samples in X.
predictProba
predictProba(X: Tensor): Tensor
Predict class probabilities for samples in X.
Returns: Class probability matrix of shape (n_samples, n_classes)
score
score(X: Tensor, y: Tensor): number
Return the mean accuracy on the given test data and labels.
Properties
Unique class labels discovered during fitting.
Example
import { DecisionTreeClassifier } from 'deepbox/ml';
import { tensor } from 'deepbox/ndarray';
const X = tensor([[1, 2], [3, 4], [5, 6], [7, 8]]);
const y = tensor([0, 0, 1, 1]);
const clf = new DecisionTreeClassifier({ maxDepth: 3 });
clf.fit(X, y);
const predictions = clf.predict(X);
DecisionTreeRegressor
A decision tree regressor that chooses splits by maximizing the reduction in mean squared error (MSE).
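The MSE of a node's targets is their variance around the node mean, and a split's quality is the parent MSE minus the size-weighted MSE of the two children. A plain-array sketch of that calculation (illustrative, not the library's internals):

```typescript
// MSE of a node's targets = variance around their mean.
function mse(y: number[]): number {
  const mean = y.reduce((a, b) => a + b, 0) / y.length;
  return y.reduce((a, b) => a + (b - mean) ** 2, 0) / y.length;
}

// Weighted MSE reduction from splitting a parent node into left/right.
function mseReduction(parent: number[], left: number[], right: number[]): number {
  const n = parent.length;
  return mse(parent) - (left.length / n) * mse(left) - (right.length / n) * mse(right);
}

mseReduction([1, 2, 9, 10], [1, 2], [9, 10]); // 16 — a very good split
```

The regressor evaluates candidate thresholds per feature and keeps the split with the largest reduction.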
Constructor
new DecisionTreeRegressor(options?: {
maxDepth?: number;
minSamplesSplit?: number;
minSamplesLeaf?: number;
maxFeatures?: number;
randomState?: number;
})
maxDepth: Maximum depth of the tree.
minSamplesSplit: Minimum samples required to split an internal node.
minSamplesLeaf: Minimum samples required at a leaf node.
maxFeatures: Number of features to consider for the best split.
randomState: Random seed for reproducibility.
Methods
fit
fit(X: Tensor, y: Tensor): this
Build a decision tree regressor from the training set.
predict
predict(X: Tensor): Tensor
Predict target values for samples in X.
score
score(X: Tensor, y: Tensor): number
Return the R² score on the given test data.
Returns: R² = 1 - SS_res / SS_tot
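The R² formula above can be checked with a plain-array sketch, where SS_res is the sum of squared residuals and SS_tot is the total sum of squares around the mean of the true targets (illustrative only):

```typescript
// R² = 1 - SS_res / SS_tot
function r2Score(yTrue: number[], yPred: number[]): number {
  const mean = yTrue.reduce((a, b) => a + b, 0) / yTrue.length;
  const ssRes = yTrue.reduce((acc, y, i) => acc + (y - yPred[i]) ** 2, 0);
  const ssTot = yTrue.reduce((acc, y) => acc + (y - mean) ** 2, 0);
  return 1 - ssRes / ssTot;
}

r2Score([1, 2, 3], [1, 2, 3]); // 1 — perfect fit
r2Score([1, 2, 3], [2, 2, 2]); // 0 — no better than predicting the mean
```

A perfect fit gives 1, always predicting the mean gives 0, and worse-than-mean predictions give negative values.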
Example
import { DecisionTreeRegressor } from 'deepbox/ml';
import { tensor } from 'deepbox/ndarray';
const X = tensor([[1], [2], [3], [4]]);
const y = tensor([1.5, 2.5, 3.5, 4.5]);
const reg = new DecisionTreeRegressor({ maxDepth: 5 });
reg.fit(X, y);
const predictions = reg.predict(X);
RandomForestClassifier
An ensemble of decision trees trained on random subsets of data and features. Predictions are made by majority voting.
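Majority voting means each tree casts one vote per sample and the most common label wins. A minimal sketch of the voting step for a single sample (illustrative, not the library's internals):

```typescript
// Most frequent label among one sample's per-tree predictions.
function majorityVote(treeVotes: number[]): number {
  const counts = new Map<number, number>();
  for (const v of treeVotes) counts.set(v, (counts.get(v) ?? 0) + 1);
  let best = treeVotes[0];
  let bestCount = 0;
  for (const [label, count] of counts) {
    if (count > bestCount) {
      best = label;
      bestCount = count;
    }
  }
  return best;
}

majorityVote([0, 1, 1, 0, 1]); // 1 — three of five trees voted for class 1
```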
Constructor
new RandomForestClassifier(options?: {
nEstimators?: number;
maxDepth?: number;
minSamplesSplit?: number;
minSamplesLeaf?: number;
maxFeatures?: "sqrt" | "log2" | number;
bootstrap?: boolean;
randomState?: number;
})
nEstimators: Number of trees in the forest.
maxDepth: Maximum depth of each tree.
minSamplesSplit: Minimum samples to split a node.
minSamplesLeaf: Minimum samples at a leaf.
maxFeatures: Number of features to consider at each split: 'sqrt', 'log2', or an integer. Default: "sqrt".
bootstrap: Whether to use bootstrap samples.
randomState: Random seed for reproducibility.
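One plausible way the maxFeatures option could resolve to a concrete feature count, following the common convention (this mirrors scikit-learn's behavior and is an assumption about this library, not its documented internals):

```typescript
// Assumed resolution of maxFeatures to a feature count per split.
function resolveMaxFeatures(maxFeatures: "sqrt" | "log2" | number, nFeatures: number): number {
  if (maxFeatures === "sqrt") return Math.max(1, Math.floor(Math.sqrt(nFeatures)));
  if (maxFeatures === "log2") return Math.max(1, Math.floor(Math.log2(nFeatures)));
  return Math.min(maxFeatures, nFeatures); // integer: use it directly, capped
}

resolveMaxFeatures("sqrt", 16); // 4
resolveMaxFeatures("log2", 16); // 4
```

Sampling a random subset of features at each split decorrelates the trees, which is what makes the ensemble stronger than any single tree.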
Methods
fit
fit(X: Tensor, y: Tensor): this
Fit the random forest classifier. Builds an ensemble of decision trees.
predict
predict(X: Tensor): Tensor
Predict class labels via majority voting.
predictProba
predictProba(X: Tensor): Tensor
Predict class probabilities by averaging predictions from all trees.
score
score(X: Tensor, y: Tensor): number
Return mean accuracy.
Example
import { RandomForestClassifier } from 'deepbox/ml';
import { tensor } from 'deepbox/ndarray';
const X = tensor([[1, 2], [3, 4], [5, 6], [7, 8]]);
const y = tensor([0, 0, 1, 1]);
const clf = new RandomForestClassifier({ nEstimators: 100 });
clf.fit(X, y);
const predictions = clf.predict(X);
RandomForestRegressor
An ensemble of decision tree regressors. Predictions are averaged across all trees.
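For regression, the ensemble's prediction for a sample is simply the arithmetic mean of the per-tree predictions. A minimal sketch of that aggregation step (illustrative only):

```typescript
// Average one sample's predictions across all trees in the forest.
function averagePredictions(treePreds: number[]): number {
  return treePreds.reduce((a, b) => a + b, 0) / treePreds.length;
}

averagePredictions([2.0, 3.0, 4.0]); // 3
```

Averaging reduces the variance of individual deep trees, which is the main benefit of bagging for regression.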
Constructor
new RandomForestRegressor(options?: {
nEstimators?: number;
maxDepth?: number;
minSamplesSplit?: number;
minSamplesLeaf?: number;
maxFeatures?: "sqrt" | "log2" | number;
bootstrap?: boolean;
randomState?: number;
})
nEstimators: Number of trees in the forest.
maxDepth: Maximum depth of each tree.
minSamplesSplit: Minimum samples to split a node.
minSamplesLeaf: Minimum samples at a leaf.
maxFeatures: Number of features to consider at each split: 'sqrt', 'log2', 1.0 (all features), or an integer. Default: 1.0.
bootstrap: Whether to use bootstrap samples.
randomState: Random seed for reproducibility.
Methods
fit
fit(X: Tensor, y: Tensor): this
Fit the random forest regressor.
predict
predict(X: Tensor): Tensor
Predict target values by averaging predictions from all trees.
score
score(X: Tensor, y: Tensor): number
Return R² score.
Example
import { RandomForestRegressor } from 'deepbox/ml';
import { tensor } from 'deepbox/ndarray';
const X = tensor([[1], [2], [3], [4]]);
const y = tensor([1.5, 2.5, 3.5, 4.5]);
const reg = new RandomForestRegressor({ nEstimators: 100 });
reg.fit(X, y);
const predictions = reg.predict(X);