GradTensor Class
Tensor wrapper that records a computation graph for reverse-mode autodiff. Tracks operations and enables automatic gradient computation through backpropagation.
Properties
The underlying tensor containing the data.
Whether this tensor requires gradient computation.
Accumulated gradient for this tensor. Returns null if no gradient has been computed.
The dimensions of the underlying tensor.
Data type of the tensor elements.
Total number of elements.
Number of dimensions.
Gradient Operations
Backpropagate gradients from this node through the recorded graph.
Parameters:
grad (Tensor) - Optional seed gradient (default: ones)
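The mechanics of the backward pass can be sketched with a minimal scalar reverse-mode graph. All names below (`ScalarNode`, `add`, `mul`) are illustrative, not the library's actual internals, and the real GradTensor operates on whole tensors:

```typescript
// Minimal scalar reverse-mode autodiff sketch (hypothetical names).
class ScalarNode {
  grad = 0;
  constructor(
    public value: number,
    // Each parent is stored with the local derivative d(this)/d(parent).
    private parents: Array<[ScalarNode, number]> = [],
  ) {}

  // Seed with dL/d(this) (default 1, mirroring the "ones" default) and push
  // gradients to parents via the chain rule. This naive recursion is fine
  // for tree-shaped graphs; shared intermediates need a topological sort.
  backward(seed = 1): void {
    this.grad += seed;
    for (const [p, local] of this.parents) p.backward(seed * local);
  }
}

const add = (a: ScalarNode, b: ScalarNode) =>
  new ScalarNode(a.value + b.value, [[a, 1], [b, 1]]);
const mul = (a: ScalarNode, b: ScalarNode) =>
  new ScalarNode(a.value * b.value, [[a, b.value], [b, a.value]]);

const x = new ScalarNode(3);
const y = new ScalarNode(4);
const z = add(mul(x, y), x); // z = x*y + x = 15
z.backward();                // dz/dx = y + 1 = 5, dz/dy = x = 3
```

Note how gradients accumulate into `x.grad` from both paths through the graph, which is why a separate "reset gradients to zero" operation exists.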
Reset accumulated gradients to zero.
Create a new GradTensor that doesn't track gradients.
Returns: GradTensor with requiresGrad=false
Manually set the gradient for this tensor.
Parameters:
grad (Tensor) - Gradient tensor
Enable or disable gradient tracking.
Parameters:
value (boolean) - Whether to track gradients
Arithmetic Operations
Element-wise addition with automatic differentiation.
Parameters:
other (GradTensor) - Tensor to add
Element-wise subtraction with automatic differentiation.
Parameters:
other (GradTensor) - Tensor to subtract
Element-wise multiplication with automatic differentiation.
Parameters:
other (GradTensor) - Tensor to multiply
Element-wise division with automatic differentiation.
Parameters:
other (GradTensor) - Divisor tensor
Element-wise negation with automatic differentiation.
Returns: GradTensor result
Element-wise absolute value with automatic differentiation.
Returns: GradTensor result
Element-wise power with automatic differentiation.
Parameters:
exponent (number) - Exponent value
Element-wise square root with automatic differentiation.
Returns: GradTensor result
Element-wise square with automatic differentiation.
Returns: GradTensor result
Clip values with automatic differentiation.
Parameters:
minVal (number) - Minimum value
maxVal (number) - Maximum value
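The local derivatives these ops record can be sanity-checked with finite differences. This is an illustrative sketch, not library code; in particular the clip-gradient convention (pass-through strictly inside the range, zero outside) is a common choice, not confirmed by this document:

```typescript
// Central finite-difference approximation of df/dx.
const numDiff = (f: (x: number) => number, x: number, eps = 1e-6): number =>
  (f(x + eps) - f(x - eps)) / (2 * eps);

const a = 2, b = 5;
// div: d(a/b)/da = 1/b and d(a/b)/db = -a/b^2
const dDivDa = numDiff(x => x / b, a);   // ~ 1/5 = 0.2
const dDivDb = numDiff(x => a / x, b);   // ~ -2/25 = -0.08
// pow with fixed exponent n: d(a^n)/da = n * a^(n-1)
const dPowDa = numDiff(x => x ** 3, a);  // ~ 3 * 2^2 = 12
// clip passes gradient through where the input is inside [minVal, maxVal]
// and blocks it outside (assumed convention).
const clip = (x: number, lo: number, hi: number) =>
  Math.min(Math.max(x, lo), hi);
const dClipInside = numDiff(x => clip(x, 0, 10), a);  // ~ 1
const dClipOutside = numDiff(x => clip(x, 0, 1), a);  // ~ 0
```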
Mathematical Functions
Element-wise exponential with automatic differentiation.
Returns: GradTensor result
Element-wise natural logarithm with automatic differentiation.
Returns: GradTensor result
Element-wise hyperbolic tangent with automatic differentiation.
Returns: GradTensor result
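The derivatives these three functions record are standard calculus identities; a quick sketch with a numerical cross-check (illustrative, not library code):

```typescript
// exp'(x) = exp(x),  log'(x) = 1/x,  tanh'(x) = 1 - tanh(x)^2
const x = 0.5;
const dExp = Math.exp(x);
const dLog = 1 / x;                      // = 2 at x = 0.5
const dTanh = 1 - Math.tanh(x) ** 2;

// Cross-check tanh' with a central finite difference.
const eps = 1e-6;
const dTanhNum = (Math.tanh(x + eps) - Math.tanh(x - eps)) / (2 * eps);
```

The tanh derivative is notable because it can be computed from the forward output alone, so the backward pass needs no extra evaluation of tanh.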
Activation Functions
ReLU activation with automatic differentiation.
Returns: GradTensor result
Leaky ReLU activation with automatic differentiation.
Parameters:
negativeSlope (number) - Slope for negative values (default: 0.01)
Sigmoid activation with automatic differentiation.
Returns: GradTensor result
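The forward/derivative pairs these activations typically record, as a scalar sketch (illustrative helper names, not the library's API; the subgradient choice at x = 0 for ReLU is an assumption):

```typescript
const relu = (x: number) => Math.max(0, x);
const dRelu = (x: number) => (x > 0 ? 1 : 0); // subgradient 0 at x = 0

const leakyRelu = (x: number, slope = 0.01) => (x >= 0 ? x : slope * x);
const dLeakyRelu = (x: number, slope = 0.01) => (x >= 0 ? 1 : slope);

const sigmoid = (x: number) => 1 / (1 + Math.exp(-x));
// sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x)) — reuses the forward value,
// so the backward pass is cheap once the forward output is cached.
const dSigmoid = (x: number) => sigmoid(x) * (1 - sigmoid(x));
```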
Reduction Operations
Sum with automatic differentiation.
Parameters:
axis (Axis) - Optional axis to reduce along
keepdims (boolean) - Keep reduced dimensions (default: false)
Mean with automatic differentiation.
Parameters:
axis (Axis) - Optional axis to reduce along
keepdims (boolean) - Keep reduced dimensions (default: false)
Maximum with automatic differentiation.
Parameters:
axis (Axis) - Optional axis to reduce along
keepdims (boolean) - Keep reduced dimensions (default: false)
Minimum with automatic differentiation.
Parameters:
axis (Axis) - Optional axis to reduce along
keepdims (boolean) - Keep reduced dimensions (default: false)
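The backward rules for these reductions are simple to state. A 1-D sketch (illustrative, not the library's implementation; tie-breaking for max/min when several elements attain the extremum is an assumption):

```typescript
// sum:     the upstream gradient is broadcast unchanged to every element.
// mean:    broadcast and scaled by 1/N.
// max/min: routed only to the element that attained the extremum.
const xs = [1, 4, 2];
const upstream = 1; // pretend dL/d(result) = 1

const sumGrad = xs.map(() => upstream);              // [1, 1, 1]
const meanGrad = xs.map(() => upstream / xs.length); // [1/3, 1/3, 1/3]
const maxIdx = xs.indexOf(Math.max(...xs));          // first argmax wins here
const maxGrad = xs.map((_, i) => (i === maxIdx ? upstream : 0)); // [0, 1, 0]
```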
Shape Operations
Reshape with automatic differentiation.
Parameters:
newShape (Shape) - Desired shape
Flatten to 1D with automatic differentiation.
Returns: GradTensor result
Transpose with automatic differentiation.
Parameters:
axes (readonly number[]) - Optional axis permutation
Create a view with automatic differentiation.
Parameters:
shape (Shape) - Desired shape
strides (readonly number[]) - Optional strides
offset (number) - Optional offset
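Shape operations have cheap backward passes: reshape, flatten, and view backward just reshape the upstream gradient back to the input shape, and transpose backward applies the inverse permutation. Computing that inverse is a one-liner; this is a sketch assuming the common convention that output axis `i` takes input axis `axes[i]`:

```typescript
// Invert an axis permutation: if output axis i takes input axis axes[i],
// then inverse[axes[i]] = i maps the gradient back.
const axes = [2, 0, 1];
const inverse: number[] = new Array(axes.length);
axes.forEach((a, i) => { inverse[a] = i; });
// A gradient that was transposed by `axes` is mapped back by `inverse`.
```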
Indexing Operations
Slice with automatic differentiation.
Parameters:
...ranges (SliceRange[]) - Slice specifications
Gather with automatic differentiation.
Parameters:
indices (GradTensor) - Index tensor
axis (Axis) - Axis to gather along
Linear Algebra
Matrix multiplication with automatic differentiation.
Parameters:
other (GradTensor) - Right-hand tensor
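For C = A·B with upstream gradient dC, the standard matmul gradients are dA = dC·Bᵀ and dB = Aᵀ·dC. A self-contained sketch on small matrices (helper names are illustrative, not the library's API):

```typescript
type Mat = number[][];
const matmul = (A: Mat, B: Mat): Mat =>
  A.map(row =>
    B[0].map((_, j) => row.reduce((sum, v, k) => sum + v * B[k][j], 0)));
const transpose = (M: Mat): Mat => M[0].map((_, j) => M.map(row => row[j]));

const A = [[1, 2], [3, 4]];
const B = [[5, 6], [7, 8]];
const dC = [[1, 1], [1, 1]]; // pretend dL/dC is all ones

// dA = dC·Bᵀ, dB = Aᵀ·dC — note both have the same shape as A and B.
const dA = matmul(dC, transpose(B)); // [[11, 15], [11, 15]]
const dB = matmul(transpose(A), dC); // [[4, 4], [6, 6]]
```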
Creation Functions
Create a GradTensor from data (learnable parameter).
Parameters:
data (number | number[] | number[][] | number[][][]) - Initial values
options (GradTensorOptions) - Optional { requiresGrad?, dtype? }
Create a GradTensor from an existing Tensor.
Parameters:
t (Tensor) - Input tensor
options (GradTensorOptions) - Optional { requiresGrad?, dtype? }
Create a scalar GradTensor.
Parameters:
value (number) - Scalar value
options (GradTensorOptions) - Optional { requiresGrad?, dtype? }
Context Management
Temporarily disable gradient tracking.
Parameters:
callback (() => T) - Synchronous function to execute without gradient tracking
Note: Only accepts synchronous callbacks. Async functions will throw an error.
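The synchronous-only restriction follows from how such a guard must work: the tracking flag is flipped, the callback runs, and the flag is restored in a `finally`. An async callback would escape the guard because the flag is restored before its promise settles. A hypothetical implementation sketch (the library's internal flag and error message will differ):

```typescript
// Module-level tracking flag (assumed; not the library's actual internals).
let gradEnabled = true;

function noGrad<T>(callback: () => T): T {
  const prev = gradEnabled;
  gradEnabled = false;
  try {
    const result = callback();
    // Reject promises outright: the guard would already be lifted by the
    // time the async work actually runs.
    if (result instanceof Promise) {
      throw new Error("noGrad: callback must be synchronous");
    }
    return result;
  } finally {
    gradEnabled = prev; // restored even if the callback throws
  }
}

const trackedInside = noGrad(() => gradEnabled); // false while inside
```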
Gradient-Aware Operations
These are specialized versions of operations with gradient support:
Softmax with gradient support.
Parameters:
t (Tensor | GradTensor) - Input tensor
axis (Axis) - Axis to compute along (default: -1)
Log-softmax with gradient support.
Parameters:
t (Tensor | GradTensor) - Input tensor
axis (Axis) - Axis to compute along (default: -1)
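Softmax is a good example of why these get dedicated gradient-aware versions: its Jacobian couples every output to every input, but the vector-Jacobian product collapses to a cheap elementwise formula. A 1-D sketch (illustrative, not library code):

```typescript
// Numerically stable softmax: subtract the max before exponentiating.
const softmax = (xs: number[]): number[] => {
  const m = Math.max(...xs);
  const exps = xs.map(x => Math.exp(x - m));
  const Z = exps.reduce((acc, e) => acc + e, 0);
  return exps.map(e => e / Z);
};

// VJP for softmax: given outputs s and upstream grad g,
// dL/dx_i = s_i * (g_i - sum_j g_j * s_j).
const softmaxVjp = (s: number[], g: number[]): number[] => {
  const dot = s.reduce((acc, si, i) => acc + si * g[i], 0);
  return s.map((si, i) => si * (g[i] - dot));
};

const s = softmax([1, 2, 3]);
const dx = softmaxVjp(s, [1, 0, 0]);
```

Since softmax outputs sum to 1, any input gradient `dx` sums to 0, which is a handy invariant to check in tests.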
Variance with gradient support.
Parameters:
t (Tensor | GradTensor) - Input tensor
axis (Axis) - Optional axis to reduce along
keepdims (boolean) - Keep reduced dimensions (default: false)
Dropout with gradient support.
Parameters:
t (Tensor | GradTensor) - Input tensor
rate (number) - Dropout probability
training (boolean) - Whether in training mode
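A common way to implement the `training` flag is inverted dropout: kept activations are scaled by 1/(1 − rate) during training so inference is a plain identity. This is an assumption about the implementation, sketched in 1-D (the backward pass would reuse the same mask and scaling):

```typescript
// Inverted-dropout sketch (assumed convention, not confirmed library code).
function dropout(xs: number[], rate: number, training: boolean): number[] {
  if (!training || rate === 0) return xs.slice(); // eval mode: identity
  const keep = 1 - rate;
  // Each element survives with probability `keep` and is rescaled by 1/keep
  // so the expected value matches the input.
  return xs.map(x => (Math.random() < keep ? x / keep : 0));
}

const evalOut = dropout([1, 2, 3], 0.5, false); // unchanged in eval mode
const allDropped = dropout([1, 2, 3], 1, true); // rate 1 zeroes everything
```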
Image-to-column transformation with gradient support.
Parameters:
input (Tensor | GradTensor) - Input tensor
kernelSize ([number, number]) - Kernel dimensions
stride (number | [number, number]) - Stride
padding (number | [number, number]) - Padding
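im2col unrolls each kernel-sized window of the input into a row, turning convolution into a single matrix multiplication; its backward pass scatter-adds gradients back into overlapping windows. A minimal single-channel sketch with stride 1 and no padding (the library's version also handles stride, padding, and batching; the row layout here is an assumption):

```typescript
// im2col for one channel, stride 1, no padding: each output row is one
// flattened kernel-sized patch, in row-major window order.
function im2col(img: number[][], kh: number, kw: number): number[][] {
  const rows = img.length - kh + 1;
  const cols = img[0].length - kw + 1;
  const out: number[][] = [];
  for (let i = 0; i < rows; i++) {
    for (let j = 0; j < cols; j++) {
      const patch: number[] = [];
      for (let di = 0; di < kh; di++)
        for (let dj = 0; dj < kw; dj++) patch.push(img[i + di][j + dj]);
      out.push(patch);
    }
  }
  return out;
}

const img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]];
const patches = im2col(img, 2, 2);
// 4 patches: [1,2,4,5], [2,3,5,6], [4,5,7,8], [5,6,8,9]
```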
Types
Options for creating GradTensors.
Static Methods
Check if a value is a GradTensor.
Parameters:
value (unknown) - Value to check
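Such a check is normally written as a TypeScript user-defined type guard, so a passing call narrows `unknown` to `GradTensor` for the compiler. A sketch with a stand-in class (the library's actual check and class internals may differ, e.g. it might use a brand property instead of `instanceof`):

```typescript
// Stand-in class for illustration only.
class GradTensor {
  constructor(public requiresGrad = false) {}
}

// `value is GradTensor` makes this a type guard: inside an
// `if (isGradTensor(v))` branch, v is typed as GradTensor.
function isGradTensor(value: unknown): value is GradTensor {
  return value instanceof GradTensor;
}
```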