The KernelCache class stores and manages evaluations of the kernel-induced Gram matrix for efficient repeated access during SVM training and inference.
Overview
For a fixed dataset {x_1, ..., x_n} and kernel function k, the Gram matrix is defined as K_ij = k(x_i, x_j). KernelCache provides:
- Lazy evaluation (computes entries on demand)
- Symmetric storage (exploits K_ij = K_ji)
- Full precomputation for batch operations
- Efficient numerical access via Eigen dense matrices
KernelCache
Constructor
Constructs a kernel cache for a dataset and kernel function.
Parameters:
- Input samples (dataset)
- Kernel function to use for computing Gram matrix entries
Methods
size()
Returns the number of samples in the cached dataset.
Returns: Number of samples
operator()
Accesses a single Gram matrix entry with lazy evaluation.
Parameters:
- Row index i
- Column index j
Returns: Gram matrix entry K_ij = k(x_i, x_j)
gram_matrix()
Accesses the full Gram matrix, ensuring all entries are computed.
Returns: Reference to the complete Gram matrix (Eigen::MatrixXd)
precompute()
Forces computation of all kernel evaluations upfront.
kernel()
Accesses the underlying kernel function.
Returns: Reference to the kernel function used by this cache
Type aliases
Matrix = Eigen::MatrixXd - Dense matrix type for storing the Gram matrix
Example usage
Performance considerations
Lazy evaluation
By default, entries are computed on first access. This is efficient when:
- You only need a subset of the Gram matrix
- Memory is limited
- The dataset is large
Precomputation
Use precompute() or gram_matrix() when:
- You need the entire Gram matrix
- You’ll access entries multiple times
- Training time is critical and you want to avoid repeated lazy checks
Symmetry exploitation
The cache automatically exploits kernel symmetry: when computing K_ij, it also stores K_ji. This reduces the number of kernel evaluations by approximately 50%.
Integration with SVM
The KernelCache is typically used internally by SVM implementations to avoid recomputing kernel values during iterative optimization algorithms such as Sequential Minimal Optimization (SMO).